Search results for: Pedagogy techniques

156 Exchange Rate Volatility, Its Determinants and Effects on the Manufacturing Sector in Nigeria

Authors: Chimaobi V. Okolo, Onyinye S. Ugwuanyi, Kenneth A. Okpala

Abstract:

This study evaluated the effect of exchange rate volatility on the manufacturing sector of Nigeria. The flow and stock market theories of exchange rate determination were adopted, considering macroeconomic determinants such as balance of trade, trade openness, and net international investment. Furthermore, the influence of changes in the parallel exchange rate, official exchange rate and real effective exchange rate on manufacturing sector output was modeled. Vector autoregression techniques and a vector error correction mechanism were adopted to explore the macroeconomic determinants of exchange rate fluctuation in Nigeria and to examine the influence of exchange rate volatility on manufacturing sector output. The exchange rate showed unstable and volatile movement in Nigeria. The official exchange rate significantly impacted the manufacturing sector, and shocks to previous manufacturing sector output caused 60.76% of the fluctuation in manufacturing sector output. Trade balance, trade openness and net international investment did not significantly determine the exchange rate in Nigeria; own shock accounted for about 95% of the variation in exchange rate fluctuation in both the short run and the long run. Among the other macroeconomic variables, net international investment accounted for about 2.85% of the variation in real effective exchange rate fluctuation in the short run and the long run. Monetary authorities should maintain exchange rate stability through proper management so as to encourage local production, and government should formulate and implement policies that develop other sectors of the economy, as this will widen the country’s revenue base, reduce over-reliance on the oil sector for foreign exchange earnings, and in turn reduce shocks to the domestic economy.

Keywords: Exchange rate volatility, exchange rate determinants, manufacturing sector, official exchange rate, parallel exchange rate, real effective exchange rate.

155 Laboratory Evaluation of Asphalt Concrete Prepared with Over Burnt Brick Aggregate Treated by Zycosoil

Authors: D. Sarkar, M. Pal, A. K. Sarkar

Abstract:

Asphaltic concrete for pavement construction in India is produced using crushed stone, gravel, etc. as aggregate. In the north-eastern region of India there is a scarcity of stone aggregate, so road engineers are always in search of an alternative material that can replace the regularly used aggregate. The purpose of this work was to evaluate the utilization of substandard or marginal aggregates in flexible pavement construction. The investigation was undertaken to evaluate the effects of using lower quality aggregates, such as over burnt brick aggregate, in the preparation of asphalt concrete for flexible pavements. The scope of this work included a review of available literature and existing data, a laboratory evaluation organized to determine the effects of marginal aggregates and potential techniques to upgrade these substandard materials, and a laboratory evaluation of the upgraded marginal aggregate asphalt mixtures. Over burnt brick aggregates are water susceptible and can lead to moisture damage. Moisture damage is the progressive loss of functionality of the material owing to loss of the adhesion bond between the asphalt binder and the aggregate surface. Hence, Zycosoil was evaluated in this study as an anti-stripping additive. This study summarizes the results of the laboratory evaluation carried out to investigate the properties of asphalt concrete prepared with Zycosoil-modified over burnt brick aggregate. Marshall specimens were prepared with stone aggregate, Zycosoil-modified stone aggregate, over burnt brick aggregate and Zycosoil-modified over burnt brick aggregate. Results show that the addition of Zycosoil to stone aggregate increased stability by 6%, and its addition to over burnt brick aggregate increased stability by 30%.

Keywords: Asphalt Concrete, Over Burnt Brick Aggregate, Marshall Stability, Zycosoil.

154 Private Monetary Rates of Return to Humanities and Education Programs in Public Universities in Osun State, Nigeria

Authors: A. S. Adelokun, O. O. Gambo, A. A. Adegboye

Abstract:

This study estimates the private cost of Humanities and Education programs in public universities in Osun State, Nigeria, as well as the private monetary returns to Humanities and Education programs in public universities in the state. It also estimates the private rates of return to Humanities and Education programs in public universities in Osun State, with the view of providing information on the relative profitability of investments in Humanities and Education programs in public universities in the state. The study adopted a descriptive survey research design. The population for the study consisted of all Humanities and Education students from public universities in Osun State and all Humanities and Education graduates working in Osun State establishments. The sample was made up of 600 students and 120 workers. The students were selected through a simple random sampling technique from the two public universities in the state, while the workers were purposively selected from Osun State establishments. These workers were graduates of Humanities and Education programs. The selected programs included Bachelor of Arts (B.A.) in English, Bachelor of Education (B.Ed.) in English, B.A. in Religious Studies, B.Ed. in Religious Studies, B.A. in Yoruba and B.Ed. in Yoruba. Two research instruments were used, namely the Private Costs of University Education Questionnaire (PCUEQ) and the Age Education Earnings of Workers Questionnaire (AEEWQ). The data were analyzed using compounding and discounted cash flow techniques. The results showed that the private costs of Humanities and Education programs in public universities in Osun State were N855,935.59 and N694,269.34, respectively. The private monetary returns to Humanities and Education programs in public universities in the state were N9,052,859.28 and N9,052,859.28, respectively. The private rates of return to Humanities and Education programs in public universities in Osun State were 27.36% and 34.40%, respectively. The study concluded that it was more profitable to invest in Education programs than in Humanities programs at public universities in Osun State, Nigeria.
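The rate-of-return calculation described above can be sketched numerically: the private rate of return is the discount rate that equates the present value of the lifetime earnings differential to the compounded private cost. The cash-flow profile and figures below are illustrative placeholders, not the study's data.

```python
# Illustrative sketch of a private rate-of-return calculation by the
# discounted-cash-flow method; the cost and earnings figures are made up.
def npv(rate, cash_flows):
    """Net present value of a yearly cash-flow stream at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def internal_rate_of_return(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the discount rate that drives NPV to zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # earnings still outweigh costs, so the rate can rise
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical profile: four years of private study costs, then a flat
# annual earnings differential over a 30-year working life (both assumed).
annual_cost = -215_000            # naira per study year (assumed)
annual_premium = 300_000          # extra annual earnings of graduates (assumed)
cash_flows = [annual_cost] * 4 + [annual_premium] * 30

print(f"private rate of return = {internal_rate_of_return(cash_flows):.2%}")
```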

Keywords: Rates of return, private cost, investment, education.

153 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca

Abstract:

Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to anticholinergic burden scores of multiple medicines prescribed to patients to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data on the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters that represent different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed to female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.
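A minimal sketch of the clustering step is shown below, assuming the per-patient weighted burden scores have already been computed; the scikit-learn MeanShift estimator and the synthetic scores stand in for the production pipeline described above.

```python
# Sketch: group patients by weighted anticholinergic burden using mean-shift.
# The scores below are synthetic; in the study they come from summing
# literature-derived burden scores (scale 1-3) over each patient's prescriptions.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
weighted_scores = np.concatenate([
    rng.normal(1.0, 0.3, 200),   # low-burden patients
    rng.normal(4.0, 0.5, 80),    # moderate burden
    rng.normal(9.0, 0.8, 15),    # small highest-risk group
]).reshape(-1, 1)

bandwidth = estimate_bandwidth(weighted_scores, quantile=0.2, random_state=0)
labels = MeanShift(bandwidth=bandwidth).fit_predict(weighted_scores)

for cluster in np.unique(labels):
    members = weighted_scores[labels == cluster]
    print(f"cluster {cluster}: n={len(members)}, mean score={members.mean():.2f}")
```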

Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.

152 Beneficiation of Low Grade Chromite Ore and Its Characterization for the Formation of Magnesia-Chromite Refractory by Economically Viable Process

Authors: Amit Kumar Bhandary, Prithviraj Gupta, Siddhartha Mukherjee, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Chromite ores are primarily used for extraction of chromium, which is an expensive metal. For low grade chromite ores (containing less than 40% Cr2O3), chromium extraction is not usually economically viable. India possesses huge quantities of low grade chromite reserves, and these deposits can be utilized after proper physical beneficiation. Magnetic separation techniques may be useful after reduction for the beneficiation of low grade chromite ore. The sample collected from the Sukinda mines was characterized by XRD, which shows predominant phases such as maghemite, chromite, silica, magnesia and alumina. The raw ore was crushed and ground to below 75 micrometer size. The microstructure of the ore shows that the chromite grains are surrounded by a silicate matrix, and porosity is observed on the exposed side of the chromite ore. However, this ore may be utilized in refractory applications. Chromite ores contain Cr2O3, FeO, Al2O3 and other oxides, and pairs such as Fe-Cr and Mg-Cr have a high tendency to form spinel compounds, which usually show high refractoriness. Initially, the low grade chromite ore (containing 34.8% Cr2O3) was reduced at 1200 °C for 80 minutes with 30% coke fines by weight, before being subjected to magnetic separation. Reduction by coke converts the iron oxides from higher to lower oxidation states. The pre-reduced samples were then characterized by XRD. The magnetically inert mass was then reacted with 20% MgO by weight at 1450 °C for 2 hours. The resultant product was then tested for various refractoriness parameters such as apparent porosity, slag resistance, etc. The results were satisfactory, indicating that the resultant spinel compounds are suitable for refractory applications in elevated temperature processes.

Keywords: Apparent porosity, beneficiation, low grade chromite, refractory, spinel compounds, slag resistance.

151 Comparative Study of Calcium Content on in vitro Biological and Antibacterial Properties of Silicon-Based Bioglass

Authors: Morteza Elsa, Amirhossein Moghanian

Abstract:

The major aim of this study was to evaluate the effect of CaO content on in vitro hydroxyapatite formation, MC3T3 cell cytotoxicity and proliferation, as well as the antibacterial efficiency of the sol-gel derived SiO2–CaO–P2O5 ternary system. For this purpose, two grades of bioactive glass (BG), BG-58s (mol%: 60%SiO2–36%CaO–4%P2O5) and BG-68s (mol%: 70%SiO2–26%CaO–4%P2O5), were first synthesized by the sol-gel method. Second, the effect of CaO content in their composition on in vitro bioactivity was investigated by soaking the BG-58s and BG-68s powders in simulated body fluid (SBF) for time periods up to 14 days, followed by characterization with inductively coupled plasma atomic emission spectrometry (ICP-AES), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and scanning electron microscopy (SEM) techniques. Additionally, live/dead staining, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), and alkaline phosphatase (ALP) activity assays were conducted to qualitatively and quantitatively assess the viability, proliferation and differentiation of MC3T3 cells in the presence of the 58s and 68s BGs. Results showed that BG-58s, with higher CaO content, exhibited higher in vitro bioactivity than BG-68s. Moreover, the dissolution rate was inversely proportional to the oxygen density of the BG. The live/dead assay revealed that both 58s and 68s increased the mean number of live cells, in good accordance with the MTT assay. Furthermore, BG-58s showed greater antibacterial activity against methicillin-resistant Staphylococcus aureus (MRSA) bacteria. Taken together, BG-58s, with enhanced MC3T3 cell proliferation and ALP activity, acceptable bioactivity and a significantly higher antibacterial effect against MRSA, is suggested as a suitable candidate for further functionalization for the delivery of therapeutic ions and growth factors in bone tissue engineering.

Keywords: Antibacterial, bioactive glass, hydroxyapatite, proliferation, sol-gel processes.

150 Power and Wear Reduction Using Composite Links of Crank-Rocker Mechanism with Optimum Transmission Angle

Authors: Khaled M. Khader, Mamdouh I. Elimy

Abstract:

Reducing energy consumption has become a major concern for all countries of the world during recent decades. In general, power saving is currently a nominal goal of most industrial countries. It is well known that fossil fuels are the main pillar of development of world countries. Unfortunately, the increased rate of fossil fuel consumption will lead to serious problems caused by an expected depletion of fuels. Moreover, the emission of dangerous gases and vapors during fuel burning leads to severe environmental problems. Consequently, most engineering sectors, especially the mechanical sectors, are looking to improve their machines while reducing energy consumption. The crank-rocker planar mechanism is the most widely applied mechanism in mechanical systems and is one of the most significant machine elements for obtaining oscillatory motion. The transmission angle of this mechanism can be considered optimal when its extreme values are equally varied around 90°. In addition, the transmission angle plays an important role in decreasing the required driving power and improving the dynamic properties of the mechanism. Hence, appropriate selection of the mechanism's link lengths, which assures an optimum transmission angle, leads to a decrease in the required driving power. Moreover, mechanism links manufactured from composite materials are lightweight, which decreases the required driving torque. Furthermore, wear and corrosion problems can be treated by using composite links instead of metal ones. This paper deals with improving the performance of the crank-rocker mechanism using composite links, owing to their flexural elastic modulus and stiffness values in addition to the high damping of composite materials.
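For a four-bar crank-rocker, the transmission angle extremes occur when the crank lies along the fixed link, which is the basis of the "equally varied around 90°" criterion mentioned above. The sketch below evaluates a candidate set of link lengths using the standard cosine-rule relation; the numerical lengths are illustrative, not the paper's design values.

```python
# Sketch: transmission-angle extremes of a four-bar crank-rocker linkage.
# a = crank, b = coupler, c = rocker, d = fixed (ground) link; the lengths
# below are illustrative assumptions, not the paper's optimized design.
from math import acos, degrees

def transmission_angle_extremes(a, b, c, d):
    """Return (mu_min, mu_max) in degrees.

    The extremes occur when the crank is collinear with the ground link,
    i.e. the diagonal from crank pin to rocker pivot equals d - a or d + a.
    """
    mu_min = degrees(acos((b**2 + c**2 - (d - a)**2) / (2 * b * c)))
    mu_max = degrees(acos((b**2 + c**2 - (d + a)**2) / (2 * b * c)))
    return mu_min, mu_max

mu_min, mu_max = transmission_angle_extremes(a=30.0, b=100.0, c=90.0, d=110.0)
print(f"mu_min = {mu_min:.1f} deg, mu_max = {mu_max:.1f} deg")
print(f"deviation from 90 deg: {90 - mu_min:.1f} below vs {mu_max - 90:.1f} above")
```

A link set is closer to the optimum criterion the more evenly the two printed deviations straddle 90°.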

Keywords: Composite material, crank-rocker mechanism, transmission angle, design techniques, power saving.

149 Probe-Assisted Axillary Lymph Node Biopsy Compared with Axillary Dissection in Breast Cancer: A Retrospective Study from the West of Iran

Authors: Morteza Alizadeh Foroutan, Hassan Moayeri, Keivan Sabooni, Motahareh Rouhi Ardeshiri

Abstract:

Breast cancer (BC) incidence is increasing annually in various parts of the world, and sentinel lymph node biopsy (SLNB) has become a new standard of care as a staging procedure. In the present study, the gamma probe technique was used for SLNB as a safe method with greater accuracy and fewer complications. The study sought to compare the results of two surgical techniques, namely axillary lymph node dissection (ALND) and SLNB, including the epidemiological results and clinicopathological features of BC patients from the western provinces of Iran. In total, 420 women with BC who were referred to the breast clinic in Sanandaj, Kurdistan province during 2017-2021 were identified. Of these, 318 patients underwent breast surgery, and 277 of them participated in the current study. Patients were divided into those undergoing ALND and those undergoing SLNB. The criteria for complete dissection or axillary biopsy using the gamma probe were based on the results of clinical examinations and the presence of palpable lymph nodes. Overall, postoperative complications occurred in 58 (18.9%) cases, including 15 (25.9%) and 43 (74.1%) patients in the SLNB and ALND groups, respectively (P = 0.74). Based on the findings, seroma (60.3%) was the most frequently reported complication in each group. Most patients had tumors in the upper-outer quadrant of the left breast. The mean tumor dimension in the SLNB and ALND groups was 2.1 ± 1.3 cm and 3.2 ± 1.8 cm, respectively (P = 0.003). The benefits of breast-conserving surgery (BCS) with the SLNB technique are clearly undeniable, and it can be considered a method with fewer complications and a better prognosis. Accordingly, SLNB and BCS are favorable methods that can be performed along with the gamma probe technique, which is safe and accurate.

Keywords: Breast cancer, Sentinel lymph node biopsy, Axillary lymph node dissection, Gamma probe.

148 Sustainable Intensification of Agriculture in Victoria’s Food Bowl: Optimizing Productivity with the use of Decision-Support Tools

Authors: M. Johnson, R. Faggian, V. Sposito

Abstract:

A participatory and engaged approach is key in connecting agricultural managers to sustainable agricultural systems to support and optimize production in Victoria’s food bowl. A sustainable intensification (SI) approach is well documented globally, but participation rates amongst Victorian farmers are fragmentary, and key outcomes and implementation strategies are poorly understood. Improved decision-support management tools and a greater understanding of the productivity gains available upon implementation of SI are necessary. This paper reviews the current understanding and uptake of SI practices amongst farmers in one of Victoria’s premier food producing regions, the Goulburn Broken, and spatially analyses the potential for this region to adapt to climate change and optimize food production. A Geographical Information Systems (GIS) approach is taken to develop an interactive decision-support tool that is accessible to on-ground agricultural managers. The tool encompasses a multiple criteria analysis (MCA) whose factors are identified during the construction phase of the tool, using expert witnesses and regional knowledge, framed within an Analytical Hierarchy Process. Given the complexities of the interrelations between each of the key outcomes, this participatory approach, in which local realities and factors inform the key outcomes and help to shape strategies for a particular region, results in a robust strategy for sustainably intensifying production in key food producing regions. The creation of an interactive, locally embedded decision-support management and education tool can help to close the gap between farmer knowledge and production, increase on-farm adoption of sustainable farming strategies and techniques, and optimize farm productivity.
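One way to realise the Analytical Hierarchy Process step inside the MCA is sketched below: factor weights are derived from a pairwise comparison matrix and checked with the consistency ratio. The matrix, the three criteria named in the comments, and their judgements are invented for illustration.

```python
# Sketch: deriving criterion weights for the multi-criteria analysis with the
# Analytical Hierarchy Process. The pairwise judgements are illustrative only.
import numpy as np

# Saaty-style pairwise comparison matrix for three hypothetical criteria,
# e.g. soil suitability, water availability, climate exposure.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector gives the priority (weight) vector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
random_index = {3: 0.58, 4: 0.90, 5: 1.12}[n]
cr = ci / random_index

print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```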

Keywords: Agriculture, decision-support management tools, GIS, sustainable intensification.

147 Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm

Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour

Abstract:

In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. Consequently, protocols that minimize the power consumption of sensors receive more attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, which combines related data and prevents the transmission of redundant packets, can therefore be effective in reducing the number of transmitted packets. Because processing information consumes less power than transmitting it, data aggregation is of great importance and is used in many protocols [5]. One data aggregation technique is the use of a data aggregation tree, but finding an optimum data aggregation tree for collecting data in networks with one sink is an NP-hard problem. In the data aggregation technique, related information packets are combined in intermediate nodes to form one packet, so the number of packets transmitted in the network is reduced; less energy is therefore consumed, which ultimately improves the longevity of the network. Heuristic methods are used to solve this NP-hard problem, and one such optimization method is simulated annealing. In this article, we propose a new method for building the data aggregation tree in wireless sensor networks using the simulated annealing algorithm, and we evaluate its efficiency against the genetic algorithm.
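The simulated-annealing search over candidate aggregation trees can be sketched as follows. The energy model (total Euclidean length of tree edges toward the sink) and the neighbourhood move (re-parenting one random node) are simplifying assumptions for illustration, not the paper's exact cost function or move set.

```python
# Sketch: simulated annealing over data-aggregation trees rooted at the sink.
# A tree is encoded as a parent array; "energy" approximates transmission cost
# as the total Euclidean length of the tree edges (an assumed cost model).
import math, random

random.seed(1)
nodes = [(random.random(), random.random()) for _ in range(30)]  # node 0 = sink

def energy(parent):
    return sum(math.dist(nodes[i], nodes[parent[i]]) for i in range(1, len(nodes)))

def random_neighbour(parent):
    """Re-parent one random node, attaching it only outside its own subtree."""
    new = parent[:]
    i = random.randrange(1, len(nodes))
    subtree, changed = {i}, True
    while changed:                       # collect all descendants of i
        changed = False
        for j in range(1, len(nodes)):
            if new[j] in subtree and j not in subtree:
                subtree.add(j); changed = True
    new[i] = random.choice([j for j in range(len(nodes)) if j not in subtree])
    return new

parent = [0] * len(nodes)                # start: every node sends directly to sink
best_e = energy(parent)
temperature = 1.0
while temperature > 1e-3:
    cand = random_neighbour(parent)
    delta = energy(cand) - energy(parent)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        parent = cand
        best_e = min(best_e, energy(parent))
    temperature *= 0.995                 # geometric cooling schedule

print(f"initial cost {energy([0]*len(nodes)):.3f} -> annealed cost {best_e:.3f}")
```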

Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.

146 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantiakrnpanit

Abstract:

This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) Urban areas defined by Thailand's National Statistical Office (LFS), and 2) Urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellite. The second method includes two sub-categories: 2.1) Determining urban areas by calculating nighttime light density with a population density of 300 people per square kilometer, and 2.2) Calculating urban areas based on nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis based on Ordinary Least Squares (OLS), fixed effects, and random effects models reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant and negative impact on income inequality. Moreover, the square of urbanization shows a statistically significant positive impact on income inequality. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
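The U-shape test reported above amounts to regressing inequality on urbanization and its square; a minimal pooled-OLS sketch with statsmodels on synthetic data is shown below (the fixed- and random-effects variants would swap in a panel estimator). The coefficient signs in the data generator simply mirror the reported findings and are not the study's estimates.

```python
# Sketch: testing a U-shaped urbanization-inequality relationship with OLS.
# Synthetic province-year data; coefficient signs mimic the reported results
# (negative on urbanization, positive on its square, negative on log income).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 76 * 19                                   # provinces x years
urban = rng.uniform(0.05, 0.95, n)            # urbanization share
log_income = rng.normal(10.0, 0.5, n)
gini = (0.55 - 0.40 * urban + 0.35 * urban**2 - 0.01 * log_income
        + rng.normal(0, 0.02, n))

X = sm.add_constant(np.column_stack([urban, urban**2, log_income]))
model = sm.OLS(gini, X).fit(cov_type="HC1")   # heteroskedasticity-robust SEs
print(model.summary(xname=["const", "urban", "urban_sq", "log_income"]))

# Turning point of the estimated U-shape: -b1 / (2 * b2)
b = model.params
print("turning point at urbanization share =", round(-b[1] / (2 * b[2]), 3))
```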

Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.

145 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most widely used techniques in today's digital world. CBIR, commonly expanded as Content Based Image Retrieval, is an image processing technique which identifies relevant images and retrieves them based on patterns extracted from digital images. In this paper, two research works are presented using CBIR. The first work provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as Contourlet, Ridgelet and Shearlet are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. The testing phase then involves prediction, in which the trained classifier labels a new input image with one of four classes, namely 1 - normal brain, 2 - benign tumour, 3 - malignant tumour and 4 - severe tumour. The second research work develops a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. These two approaches are a contribution to the medical field, giving better retrieval performance and supporting tumour stage identification.
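The train/test stage of the first work can be sketched with scikit-learn. The feature matrix below is random placeholder data standing in for Contourlet/Ridgelet/Shearlet texture features, and the four class labels follow the abstract; the classifier settings are generic assumptions.

```python
# Sketch: comparing the three classifiers on pre-extracted texture features.
# X is a placeholder for Contourlet/Ridgelet/Shearlet feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 64))        # 64 texture features per image (synthetic)
y = rng.integers(0, 4, size=400)      # 0 normal, 1 benign, 2 malignant, 3 severe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbour": KNeighborsClassifier(n_neighbors=5),
    "Multi-class SVM": SVC(kernel="rbf", decision_function_shape="ovo"),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```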

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

144 Job in Modern Arabic Poetry: A Semantic and Comparative Approach to Two Poems Referring to the Poet Al-Sayyab

Authors: Jeries Khoury

Abstract:

The use of legendary, folkloric and religious symbols is one of the most important phenomena in modern Arabic poetry. Interestingly enough, most of the pioneers of modern Arabic poetry were fascinated by biblical symbols, and they used many modern techniques to make these symbols adequate for their personal lives on the one hand and fit their Islamic beliefs on the other. One of the most famous poets to do so was al-Sayya:b. The way he employed one of these symbols, Job, the new features he added to this character, and the link between this character and his personal life are discussed in this study. In addition, the study examines the influence of al-Sayya:b on another modern poet, Saadi Yusuf, who, following al-Sayya:b, used the character of Job in a special way, mixing its features with al-Sayya:b's personal features and thereby creating a new, mixed character. A semantic, cultural and comparative analysis of the poems written by al-Sayya:b himself and by the other poets who evoked the mixed image of al-Sayya:b-Job can reveal the changes Arab poets made to the original biblical figure of Job to bring it closer to Islamic culture. The paper makes intensive use of the notion of intertextuality in order to shed light on the network of relations between three kinds of texts (indeed, three palimpsests: 1 - the biblical primary text; 2 - al-Sayya:b's poetic secondary version; 3 - Sa'di Yusuf's re-poetic tertiary version). The bottom line of this paper is that al-Sayya:b was directly influenced by the dramatic biblical story of Job more than by the brief Quranic version of the story. In fact, the 'new' character of Job designed by al-Sayya:b differs from the original one in so many aspects that we can safely say it is the Sayyabian Job, which cannot be found in the poems of any other poet unless they are evoking the tragedy of al-Sayya:b himself, as Saadi Yusuf did.

Keywords: Arabic poetry, intertextuality, job, meter, modernism, symbolism.

143 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize the conditions of alkaline-catalysed liquefaction with regard to several parameters. To better comprehend the chemical characteristics of Quercus cerris bark, a complete chemical analysis was performed. The liquefaction process was performed in a double-jacketed reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents and potassium hydroxide as a catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency obtained in the first experimental procedures, a study was made of different washing techniques applied after the filtration process using methanol and methanol/water. The chemical analysis showed that the bark of Quercus cerris is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). At the liquefaction stage, the results that led to higher yields were obtained using a mixture of methanol/ethylene glycol as reagents and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, even though this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at liquefying the suberin content and the lignocellulosic fraction.

Keywords: Liquefaction, alkaline catalysis, optimization, Quercus cerris bark.

142 Visual Study on Flow Patterns and Heat Transfer during Convective Boiling Inside Horizontal Smooth and Microfin Tubes

Authors: V.D. Hatamipour, M.A. Akhavan-Behabadi

Abstract:

The evaporator is an important and widely used heat exchanger in the air conditioning and refrigeration industries. Different methods have been used by investigators to increase the heat transfer rates in evaporators. One of the passive techniques to enhance the heat transfer coefficient is the application of microfin tubes. The mechanism of heat transfer augmentation in microfin tubes depends on the flow regime of the two-phase flow; therefore, many investigations of the flow patterns for in-tube evaporation have been reported in the literature. The gravitational force, surface tension and the vapor-liquid interfacial shear stress are known as the three dominant factors controlling the vapor and liquid distribution inside the tube. A review of the existing literature reveals that previous investigations were concerned with the two-phase flow pattern for flow boiling in horizontal tubes [12], [9]. Therefore, the objective of the present investigation is to obtain information about the two-phase flow patterns for evaporation of R-134a inside horizontal smooth and microfin tubes. An investigation of heat transfer during flow boiling of R-134a inside horizontal microfin and smooth tubes has also been carried out experimentally. The heat transfer coefficients for annular flow in the smooth tube are shown to agree well with Gungor and Winterton's correlation [4]. All the flow patterns observed in the tests can be divided into three dominant regimes, i.e., stratified-wavy flow, wavy-annular flow and annular flow. Experimental data are plotted in two kinds of flow maps, i.e., a map of the Weber number for the vapor versus the Weber number for the liquid, and a map of mass flux versus vapor quality. The transition from wavy-annular flow to annular or stratified-wavy flow is identified in the flow maps.
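The flow-map coordinates mentioned above can be computed from the mass flux and vapour quality. The sketch below uses one common definition of the phase Weber numbers, We_v = (G·x)²·D/(ρ_v·σ) and the analogous liquid form; this definition is an assumption, since the abstract does not restate its formula, and the R-134a property values are rough illustrative figures.

```python
# Sketch: phase Weber numbers used as flow-map coordinates for in-tube
# evaporation. The definitions and fluid properties below are assumptions
# (typical R-134a values near 0.6 MPa saturation), not taken from the paper.
def weber_numbers(G, x, D, rho_v, rho_l, sigma):
    """G: mass flux [kg/m^2 s], x: vapour quality, D: tube diameter [m]."""
    we_vapour = (G * x) ** 2 * D / (rho_v * sigma)
    we_liquid = (G * (1.0 - x)) ** 2 * D / (rho_l * sigma)
    return we_vapour, we_liquid

rho_v, rho_l, sigma = 29.0, 1240.0, 0.0085   # rough R-134a properties (assumed)
D = 0.0087                                    # 8.7 mm tube diameter (assumed)

for G in (100, 200, 300):                     # mass fluxes to sweep, kg/m^2 s
    for x in (0.1, 0.5, 0.9):                 # vapour qualities
        we_v, we_l = weber_numbers(G, x, D, rho_v, rho_l, sigma)
        print(f"G={G:3d}, x={x:.1f}: We_v={we_v:9.1f}, We_l={we_l:9.1f}")
```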

Keywords: Flow boiling, Flow pattern, Heat transfer, Horizontal, Smooth tube, Microfin tube.

141 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
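The semi-automatic recommendation step can be sketched with a naive Bayes classifier over risk-factor vectors. The feature encoding, the six risk-factor dimensions and the three endangerment groups below are illustrative assumptions, not the paper's schema.

```python
# Sketch: recommending an endangerment group for an institutional risk profile
# with naive Bayes. Feature vectors are made-up risk-factor scores in [0, 1].
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
# Training profiles labelled by experts: 0 = low, 1 = medium, 2 = high endangerment.
X_train = np.vstack([
    rng.uniform(0.0, 0.3, (40, 6)),   # low-risk institutions
    rng.uniform(0.3, 0.7, (40, 6)),   # medium-risk institutions
    rng.uniform(0.7, 1.0, (40, 6)),   # high-risk institutions
])
y_train = np.repeat([0, 1, 2], 40)

model = GaussianNB().fit(X_train, y_train)

# New institutional profile: six aggregated risk-factor dimensions (assumed).
new_profile = np.array([[0.8, 0.6, 0.9, 0.7, 0.75, 0.85]])
print("recommended group:", model.predict(new_profile)[0])
print("class probabilities:", np.round(model.predict_proba(new_profile), 3))
```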

Keywords: Linked open data, information integration, digital libraries, data mining.

140 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar

Abstract:

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantial research using queuing analysis, and assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from currently heavily loaded hosts to lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms known as load balancing algorithms help to achieve these goals. They fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms take decisions regarding the assignment of tasks to processors at compile time, based on average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and take decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of the above algorithms. In the future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively, based on the comparative parameters.
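A toy simulation contrasting one static and one dynamic policy illustrates the imbalance argument above. The exponential job sizes are the usual queueing-style assumption, and both policies shown are generic stand-ins rather than specific published algorithms.

```python
# Sketch: static (load-oblivious) vs dynamic (least-loaded) assignment of
# jobs to hosts, measuring how unevenly the assigned work is spread.
import random

random.seed(0)
HOSTS, JOBS = 4, 10_000

def simulate(dynamic):
    assigned = [0.0] * HOSTS                  # total work assigned per host
    for _ in range(JOBS):
        service = random.expovariate(1.0)     # job size
        if dynamic:
            h = min(range(HOSTS), key=lambda i: assigned[i])  # run-time decision
        else:
            h = random.randrange(HOSTS)       # static choice, ignores current load
        assigned[h] += service
    return max(assigned) - min(assigned)      # imbalance across hosts

print("static  imbalance:", round(simulate(dynamic=False), 1))
print("dynamic imbalance:", round(simulate(dynamic=True), 1))
```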

Keywords: SLB, DLB, Host, Algorithm and Load.

139 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To bring the present to life, the virtual environment is generated through a presented scheme for rapid and efficient construction of a 360° panoramic view. An acoustic heritage model and a crowd model are then presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. The keystone of this research, however, is a virtual walkthrough that shows virtual present life in 2D and virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray present life in the 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough was demonstrated to a group of respondents and evaluated through a user-centred evaluation by navigating around the demonstration system. The results of the questionnaire-based evaluation show that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation, and that such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
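The 3D crowd behaviour rests on the classical Boid rules (separation, alignment, cohesion); a minimal NumPy update step is sketched below as a stand-in for the enhanced model used in the research, with arbitrary gain and radius parameters.

```python
# Sketch: one update step of the classical Boid flocking rules.
# Gains, neighbourhood radius and agent count are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, (N, 3))
vel = rng.normal(0, 1, (N, 3))

def boid_step(pos, vel, radius=15.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists < radius) & (dists > 0)          # neighbours of boid i
        if mask.any():
            new_vel[i] += -w_sep * offsets[mask].sum(axis=0)           # separation
            new_vel[i] += w_ali * (vel[mask].mean(axis=0) - vel[i])    # alignment
            new_vel[i] += w_coh * (pos[mask].mean(axis=0) - pos[i])    # cohesion
    return pos + new_vel * dt, new_vel

pos, vel = boid_step(pos, vel)
print("updated first boid position:", np.round(pos[0], 2))
```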

Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage.

138 Manodharmam: A Scientific Methodology for Improvisation and Cognition in Carnatic Music

Authors: Raghavi Janaswamy, Saraswathi K. Vasudev

Abstract:

Music is ubiquitous in human lives. From the time the foetus hears sounds inside the mother’s womb, and later when the newborn experiences alluring sounds, the curiosity to learn emanates and evokes exploration. Music is an education rather than mere entertainment. The intricate balance between music, education and entertainment has been well recognized by the scientific community and is being explored as a viable tool to understand and improve human cognition. There are seven basic swaras (notes) in the Carnatic music system, Sa, Ri, Ga, Ma, Pa, Da and Ni, analogous to C, D, E, F, G, A and B of the Western system. Carnatic music builds on the conscious use of microtones, gamakams (oscillations) and rendering styles that evolved over centuries and established its stance. The complex but erudite raga system has been designed through elaborate experiments on srutis (musical sounds) and human perception abilities. In parallel, ‘rasa’, the emotions evoked by certain srutis and hence by the ragas, has been solidified along with the power of language in combination with the musical sounds. Carnatic music branches out into Kalpita sangeetam (pre-composed music) and Manodharma sangeetam (improvised music). This article explores Manodharma sangeetam and its subdivisions, such as raga alapana, swara kalpana, neraval and ragam-tanam-pallavi (RTP). The intrinsic mathematical strategies in its practice methods for improvising the music are discussed in detail with concert examples. Techniques of swara weaving for swara kalpana rendering and methods for alapana development are also discussed at length, with an emphasis on the impact on human cognitive abilities. The articulation of the outlined conscious practice methods not only helps to leave a long-lasting melodic impression on listeners but also sets cognitive development in motion.

Keywords: Carnatic, Manodharmam, music cognition, Alapana.

137 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a great deal of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensionality.
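The four-step pipeline described above can be sketched with scikit-learn: compute feature similarities, cluster the features with K-means, keep one representative per cluster, and classify with an SVM. The data here are synthetic, and the representative-selection rule (the feature closest to its cluster centre) is an assumption about step three.

```python
# Sketch of the four-step scheme: feature similarity -> K-means over features
# -> one representative feature per cluster -> SVM classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))               # synthetic news feature matrix
y = rng.integers(0, 2, 1000)                  # 0 = real, 1 = fake (synthetic)

# Steps 1-2: describe each feature by its correlation with the others,
# then cluster the features (columns) with K-means.
feature_profiles = np.corrcoef(X.T)           # 60 x 60 similarity pattern
k = 15
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feature_profiles)

# Step 3: keep the feature closest to each cluster centre (assumed rule).
selected = [
    np.where(km.labels_ == c)[0][
        np.argmin(np.linalg.norm(feature_profiles[km.labels_ == c]
                                 - km.cluster_centers_[c], axis=1))]
    for c in range(k)
]

# Step 4: classify on the reduced feature subset with an SVM.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("selected features:", sorted(int(i) for i in selected))
print("accuracy on synthetic data:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```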

Keywords: Fake news detection, feature selection, support vector machine, K-means clustering, machine learning, social media.

136 Geostatistical Analysis and Mapping of Groundlevel Ozone in a Medium Sized Urban Area

Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez

Abstract:

Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high emissions of ozone precursors and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results are shown from a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations using an automatic portable analyzer, during periods selected for conditions favourable to ozone production. The measured ozone data were then analyzed using geostatistical techniques to evaluate the ozone distribution across the city. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Second, in the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps based on kriging interpolation and the kriging standard deviation.
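The structural-analysis step fits a spherical variogram model; a small sketch of that model is shown below. The nugget, sill and range values are illustrative, with the range chosen inside the 302-790 m interval of spatial dependence reported above.

```python
# Sketch: the spherical variogram model used in the structural analysis.
# Nugget, sill and range values are illustrative placeholders.
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """gamma(h) for the spherical model with range a (sill includes the nugget)."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(
        h <= a,
        nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
        sill,
    )
    return np.where(h == 0, 0.0, gamma)   # gamma(0) = 0 by definition

lags = np.array([0, 100, 250, 500, 750, 1000])          # lag distances, metres
print(spherical_variogram(lags, nugget=5.0, sill=60.0, a=500.0))
```

Fitting these three parameters to each monthly experimental variogram is what yields the range (distance of spatial dependence) and nugget values discussed in the abstract; the fitted model then supplies the weights used by the kriging interpolator.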

Keywords: Kriging, map, tropospheric ozone, variogram.

135 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel

Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi

Abstract:

Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia elements, which are associated with ample amounts of data, are best protected by turbo codes. The turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional turbo coded systems employ Equal Error Protection (EEP), in which the protection of all the data in an information message is uniform. Some applications involve Unequal Error Protection (UEP), in which the level of protection is higher for important information bits than for other bits. In this work, the traditional Log MAP decoding algorithm is enhanced by using optimized scaling factors for both decoders. The error correcting performance in the presence of UEP in the Additive White Gaussian Noise (AWGN) channel and under Rayleigh fading is analyzed for the transmission of an image with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the Log MAP, Modified Log MAP (MlogMAP) and Enhanced Log MAP (ElogMAP) algorithms used for image transmission. The MlogMAP algorithm is found to be best for lower Eb/N0 values, but for higher Eb/N0 ElogMAP performs better with optimized scaling factors. The performance comparison of the AWGN and fading channels indicates the robustness of the proposed algorithm. According to the performance of the three different message classes, class 3 is more strongly protected than the other two classes. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for the transmission of an image, compared to the Log MAP and MlogMAP decoding algorithms.
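At the core of the Log MAP decoder is the Jacobian logarithm (the max* operation); the sketch below shows it alongside the max-log approximation and the kind of extrinsic-LLR scaling that modified/enhanced variants apply between iterations. The 0.75 scaling value is a commonly used illustrative choice, not the optimized factor reported in the paper.

```python
# Sketch: the max* (Jacobian logarithm) kernel of Log-MAP decoding and the
# scaling of extrinsic LLRs used by scaled/modified Log-MAP variants.
import math

def max_star(a, b):
    """Exact Log-MAP: max*(a, b) = max(a, b) + log(1 + exp(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term."""
    return max(a, b)

def scale_extrinsic(llrs, factor=0.75):
    """Damp extrinsic information before passing it to the other decoder.
    The factor here is a typical illustrative value, not the paper's."""
    return [factor * llr for llr in llrs]

a, b = 1.2, 0.9
print("max*       :", round(max_star(a, b), 4))
print("max-log    :", round(max_log(a, b), 4))
print("scaled LLRs:", scale_extrinsic([2.0, -1.5, 0.4]))
```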

Keywords: AWGN, BER, DCT, Fading, MAP, UEP.

134 Professional Burn out of Teachers: Reasons and Regularities

Authors: Dabyltayeva R. Y., Smatova K. B., Kabekenov G., Toleshova U., Shagyrbayeva M.

Abstract:

In recent years in Kazakhstan, as in all countries, there has been discussion not only of professional stress but also of the professional burnout syndrome of employees. Burnout is essentially a response to chronic emotional stress that manifests itself in the form of chronic fatigue, despondency, unmotivated aggression, anger, and other symptoms. Among teachers this condition results from mental fatigue, a sort of payment for overstrain when professional commitments demand emotional investment, the need to 'heat one's soul'. The emergence of professional burnout among teachers is due to a system of interrelated and mutually reinforcing factors relating to various levels of the personality: the individual-psychological level comprises the psychodynamic characteristics of the subject, the value-motivational sphere, and the formation of skills and habits of self-regulation; the socio-psychological level includes the organization of a teacher's work and interpersonal interaction. Signs of burnout were observed in 15 of the tested teachers, and at least one symptom could be observed in virtually every teacher. As a result of the diagnosis, 48% of teachers showed signs of stress (the phase syndrome), resulting in a sense of anxiety, low mood, and heightened emotional susceptibility. The following results were also obtained: a fall in general energy potential, 14 persons; psychosomatic and psycho-vegetative syndrome, 26 persons; emotional deficit, 34 persons; emotional burnout syndrome, 6 persons. The problem of professional burnout of teachers in current conditions should become not only meaningful but particularly relevant. The quality of education of the younger generation depends on professional development, on teachers' level of training, and on how 'healthy' teachers are. That is why the systematic maintenance of pedagogic-professional development for teachers (including disclosure of the factors of professional burnout syndrome) takes on a special meaning.

Keywords: Professional burnout syndrome, adaptive syndrome, stage of depletion syndrome, symptoms and characteristics of burnout, prophylactic of professional destruction techniques.

133 Performance Analysis of Ferrocement Retrofitted Masonry Wall Units under Cyclic Loading

Authors: Raquib Ahsan, Md. Mahir Asif, Md. Zahidul Alam

Abstract:

A huge portion of the old masonry buildings in Bangladesh are vulnerable to earthquakes. In most cases these buildings contain unreinforced masonry (URM) walls, which are most likely to be subjected to earthquake damage. Due to deterioration of the mortar joints and aging, the shear resistance of these unreinforced masonry walls dwindles, so retrofitting of these old buildings has become an important issue. Among many researched and experimented techniques, ferrocement retrofitting can be a low cost technique in the context of the economic conditions of Bangladesh. This study aims at investigating the behavior of ferrocement-retrofitted unconfined URM walls under different types of cyclic loading. Four 725 mm × 725 mm masonry wall units were prepared with bricks jointed in stretcher bond with 12.5 mm mortar between adjacent layers of bricks. To assess the effectiveness of ferrocement retrofitting, a particular type of wire mesh was used in this experiment: 20 gauge woven wire mesh with 12.5 mm × 12.5 mm square openings. After retrofitting with ferrocement, the wall units were tested by applying cyclic deformation along the diagonals of the specimens. A comparative study was then performed between the retrofitted specimens and control specimens for both partially reversed cyclic loading and cyclic compression loading. The experimental results show that the ultimate load carrying capacities of the ferrocement-retrofitted specimens are 35% and 27% greater than those of the control specimens under partially reversed cyclic loading and cyclic compression, respectively, and that before failure the deformations of the ferrocement-retrofitted specimens are 43% and 33% greater than those of the control specimens under reversed cyclic loading and cyclic compression, respectively. Therefore, the test results show that the ultimate load carrying capacity and ductility of the ferrocement-retrofitted specimens have improved.

Keywords: Cyclic compression, ferrocement, masonry wall, partially reversed cyclic load, retrofitting.

132 Assessment of Wastewater Reuse Potential for an Enamel Coating Industry

Authors: Guclu Insel, Efe Gumuslu, Gulten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tugba Olmez Hanci, Didem Okutman Tas, Fatos Germirli Babuna, Derya Firat Ertem, Okmen Yildirim, Ozge Erturan, Betul Kirci

Abstract:

In order to eliminate water scarcity problems, effective precautions must be taken. Growing competition for water is increasingly forcing facilities to tackle their own water scarcity problems, and at this point the application of wastewater reclamation and reuse brings considerable economic advantages. In this study, an enamel coating facility, one of the facilities with high water consumption, is evaluated in terms of its wastewater reuse potential. Wastewater reclamation and reuse can be defined as one of the best available techniques for this sector. Hence, process and pollution profiles, together with a detailed characterization of segregated wastewater sources, are appraised in order to find out the recoverable effluent streams arising from enamel coating operations. Daily, 170 m3 of process water is required and 160 m3 of wastewater is generated. The segregated streams generated by the two enamel coating processes are characterized in terms of conventional parameters. Relatively clean segregated wastewater streams (reusable wastewaters) are collected separately, and experimental treatability studies are conducted on them. The results show that the reusable wastewater fraction amounts to approximately 110 m3/day, which accounts for 68% of the total wastewater. The treatment needed for the reusable wastewaters is determined by considering the water quality requirements of the various operations and the characterization of the reusable wastewater streams. Ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes are subsequently applied to the reusable effluent fraction. Adequate organic matter removal is not obtained with the mentioned treatment sequence.

Keywords: Enamel coating, membrane, reuse, wastewater.

131 Estimating the Costs of Conservation in Multiple Output Agricultural Setting

Authors: T. Chaiechi, N. Stoeckl

Abstract:

Scarcity of resources for biodiversity conservation gives rise to the need for strategic investment, with priority given to the cost of conservation. While the literature provides abundant methodological options for biodiversity conservation, estimating the true cost of conservation remains abstract and simplistic, without recognising the dynamic nature of that cost. Some recent works demonstrate the prominence of economic theory in informing biodiversity decisions, particularly on the costs and benefits of biodiversity; however, integration of the concept of true cost into biodiversity actions and planning has been very slow to come by, especially at the farm level. Conservation planning studies often use area as a proxy for costs, neglecting different land values as well as protected areas. Such studies consider only heterogeneous benefits while treating land costs as homogeneous. Analysis under the assumption of cost homogeneity results in biased estimation: not only does it fail to address the true total cost of biodiversity actions and plans, but it also fails to screen out lands that are more (or less) expensive and/or difficult (or more suitable) for biodiversity conservation purposes, hindering the validity and comparability of the results. Economies of scope are another of the most neglected aspects in the conservation literature. The concept of economies of scope introduces the existence of cost complementarities within a multiple-output production system, and it suggests a lower cost during the concurrent production of multiple outputs by a given farm. If there are, indeed, economies of scope, then a simplistic representation of costs will tend to overestimate the true cost of conservation, leading to suboptimal outcomes. The aim of this paper, therefore, is to provide a first broad review of the various theoretical ways in which economies of scope are likely to occur in conservation. Consequently, the paper addresses gaps that have to be filled in future analysis.
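The degree of economies of scope has a standard cost-based measure, S = [C(y1) + C(y2) - C(y1, y2)] / C(y1, y2), which is positive when joint production is cheaper than producing the outputs separately. The sketch below applies it to a farm producing a commodity and a conservation output; the cost figures are purely illustrative, not estimates from the paper.

```python
# Sketch: the standard degree-of-scope-economies measure applied to a farm
# producing an agricultural commodity and a conservation output.
# Cost figures are illustrative assumptions, not estimates from the paper.
def scope_economies(c_separate_1, c_separate_2, c_joint):
    """S > 0: cost complementarities exist (joint production is cheaper);
    S < 0: diseconomies of scope."""
    return (c_separate_1 + c_separate_2 - c_joint) / c_joint

c_agriculture_only = 120_000    # stand-alone cost of the agricultural output
c_conservation_only = 40_000    # stand-alone cost of the conservation output
c_joint_production = 135_000    # cost of producing both on the same farm

S = scope_economies(c_agriculture_only, c_conservation_only, c_joint_production)
print(f"degree of economies of scope S = {S:.3f}")
# S is about 0.185 here: summing simplistic single-output cost proxies would
# overstate the true cost of adding conservation by roughly that share.
```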

Keywords: Cost, biodiversity conservation, Multi-output production systems, Empirical techniques.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2206
130 A Temporal QoS Ontology for ERTMS/ETCS

Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien

Abstract:

Ontologies offer a means for representing and sharing information in many domains, particularly complex ones. For example, they can be used for representing and sharing the information in the System Requirement Specification (SRS) of complex systems, such as the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time, critical system, generic ontologies, such as OWL and generic ERTMS ontologies, provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one challenge is to enable the representation of dynamic features evolving in time within a generic ontology, with minimal redesign of that ontology. Separating temporal information from other information can help to predict system runtime operation and to design and implement such systems properly. In addition, it is helpful to provide reasoning and querying techniques for the temporal information represented in the ontology, in order to detect potential temporal inconsistencies. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning over and querying temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between non-QoS entities and QoS entities in an ontology. The upper, generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, is presented.
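
The kind of temporal consistency check the QoS layer is meant to support can be illustrated, outside any ontology language, with a minimal Python sketch over interval end-points; the handover intervals and the "before" rule used here are invented for illustration and do not reproduce the authors' ontology or reasoning rules:

    # Each named interval is (start, end) in seconds on a common timeline.
    intervals = {
        "announce_handover": (0.0, 1.5),
        "switch_channel":    (1.5, 2.0),
        "confirm_handover":  (1.8, 2.5),   # overlaps the previous step
    }

    def before(a, b):
        """Allen-style 'before': interval a ends no later than interval b starts."""
        return intervals[a][1] <= intervals[b][0]

    # Expected temporal ordering of the handover steps.
    expected_order = ["announce_handover", "switch_channel", "confirm_handover"]

    for earlier, later in zip(expected_order, expected_order[1:]):
        if not before(earlier, later):
            print(f"Potential temporal inconsistency: {earlier} does not precede {later}")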

Keywords: System Requirement Specification, ERTMS/ETCS, Temporal Ontologies, Domain Ontologies.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3135
129 Design and Development of Constant Stress Composite Cantilever Beam

Authors: Vinod B. Suryawanshi, Ajit D. Kelkar

Abstract:

Composite materials, owing to their unique properties such as high strength-to-weight ratio, corrosion resistance and impact resistance, have huge potential as structural materials in automotive, construction and transportation applications. However, these properties often come at a higher cost, owing to complex design methods, demanding manufacturing processes and raw material costs. Traditionally, tapered laminated composite structures are manufactured in an autoclave using the ply drop-off technique. Autoclave manufacturing, though very capable, suffers from high capital investment and high energy consumption. Following current trends in composite manufacturing, Out-of-Autoclave (OoA) processes are regarded as emerging technologies for manufacturing structural composite components for aerospace and defense applications. However, these processes still need improvement to become reliable and consistent. In this paper, the feasibility of using an out-of-autoclave process to manufacture a variable-thickness cantilever beam is discussed. The minimum-weight design for the composite beam is obtained using the constant-stress beam concept, by tailoring the thickness of the beam along its span. The ply drop-off technique was used to fabricate the variable-thickness beam from glass/epoxy prepregs. Experiments were conducted to measure the bending stresses at different intervals along the span of the cantilever beam by applying a concentrated load at the free end. The experimental results showed that the stresses in the beam at the different intervals were constant. This demonstrates the ability of the OoA process to manufacture a constant-stress beam. A finite element model of the constant-stress beam was developed using commercial finite element simulation software. The simulation results agreed very well with the experimental results, validating the design and manufacturing approach used.
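
The constant-stress sizing idea can be sketched as follows for an idealised rectangular-section cantilever with a tip load: setting the outer-fibre bending stress 6*P*x/(b*t(x)^2) equal to a constant design stress gives a thickness that grows with the square root of the distance from the free end. The load, width, length and design stress below are assumed for illustration and are not values from the study, which used a laminated glass/epoxy beam:

    import math

    P = 500.0              # tip load, N (assumed)
    b = 0.05               # beam width, m (assumed)
    sigma_design = 200e6   # constant design bending stress, Pa (assumed)
    length = 0.5           # beam length, m (assumed)

    def thickness(x):
        """Thickness at distance x from the free end so that the outer-fibre
        bending stress 6*P*x / (b*t^2) equals sigma_design."""
        return math.sqrt(6.0 * P * x / (b * sigma_design))

    # Tabulate the tapered profile; ply drop-offs would approximate this in steps.
    for i in range(1, 6):
        x = i * length / 5
        print(f"x = {x:.2f} m  ->  t = {thickness(x) * 1000:.2f} mm")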

Keywords: Beams, Composites, Constant Stress, Structures.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 4393
128 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. At the same time, the decline of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its need for energy. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding and costly and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed in an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentration, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, a square-root spectrum transform and solvent subtraction. These pre-processing methods improved the performance indexes RMSEC, RMSECV, RMSEP and cumulative R2 from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72 and 0.9416 to 0.9999, respectively. The R2 values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the goodness of fit of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and laboratory scale.
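
The multivariate calibration step described above is, at its core, a PLS regression of concentration on pre-processed spectra. The following minimal scikit-learn sketch shows that workflow on synthetic data; it stands in for the iC Quant model, and the spectra, component count and reported metrics are illustrative assumptions only:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import mean_squared_error, r2_score

    rng = np.random.default_rng(0)

    # Synthetic "spectra": 30 samples x 200 wavenumbers, with absorbance at a
    # Gaussian band growing linearly with methyl ester concentration.
    conc = rng.uniform(0, 100, size=30)                           # known concentrations (%)
    band = np.exp(-0.5 * ((np.arange(200) - 80) / 8) ** 2)
    spectra = np.outer(conc, band) + rng.normal(0, 0.5, size=(30, 200))

    # Mean centering and unit-variance scaling are handled internally (scale=True).
    pls = PLSRegression(n_components=3, scale=True).fit(spectra, conc)

    pred_cal = pls.predict(spectra).ravel()
    pred_cv = cross_val_predict(pls, spectra, conc, cv=5).ravel()

    print("RMSEC :", np.sqrt(mean_squared_error(conc, pred_cal)))
    print("RMSECV:", np.sqrt(mean_squared_error(conc, pred_cv)))
    print("R2    :", r2_score(conc, pred_cal))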

Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 934
127 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, many real-life optimization tasks require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. The direct use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, i.e., approximations of the actual fitness functions to be evaluated. These meta models are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available for building such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks that use meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta models (in this case, approximate models generated by Support Vector Machine regression) to partially replace actual function evaluations with approximate function evaluations. However, the underlying assumption in DAFHEA is that the training samples for the meta model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating both frameworks on several benchmark functions demonstrate their efficiency.
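
A minimal surrogate-assisted evolutionary loop in the spirit described above (though not the DAFHEA implementation itself) can be sketched with scikit-learn's SVR as the meta model: offspring are ranked by the cheap surrogate, and only the most promising candidates are passed to the expensive true fitness function. The benchmark function, population sizes and update schedule below are illustrative assumptions:

    import numpy as np
    from sklearn.svm import SVR

    def expensive_fitness(x):
        """Stand-in for a costly evaluation (sphere function, to be minimised)."""
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(1)
    dim, pop_size, n_true_evals_per_gen = 5, 40, 8

    # Initial population, fully evaluated with the true fitness to train the surrogate.
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    fit = np.array([expensive_fitness(x) for x in pop])
    surrogate = SVR(kernel="rbf", C=100.0).fit(pop, fit)

    for gen in range(20):
        # Generate offspring by Gaussian mutation of randomly chosen parents.
        parents = pop[rng.integers(0, pop_size, size=pop_size)]
        offspring = parents + rng.normal(0, 0.3, size=parents.shape)

        # Rank offspring cheaply with the surrogate; truly evaluate only the best few.
        approx = surrogate.predict(offspring)
        best_idx = np.argsort(approx)[:n_true_evals_per_gen]
        true_vals = np.array([expensive_fitness(offspring[i]) for i in best_idx])

        # Merge the truly evaluated offspring into the population (elitist replacement).
        pop = np.vstack([pop, offspring[best_idx]])
        fit = np.concatenate([fit, true_vals])
        keep = np.argsort(fit)[:pop_size]
        pop, fit = pop[keep], fit[keep]

        # Retrain the surrogate on the current, truly evaluated population.
        surrogate = SVR(kernel="rbf", C=100.0).fit(pop, fit)

    print("Best true fitness found:", fit.min())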

Keywords: Meta model, Evolutionary algorithm, Stochastic technique, Fitness function, Optimization, Support vector machine.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2066