Search results for: response spectrum method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9390

240 Microalbuminuria in Essential Hypertension

Authors: Sharan Badiger, Prema T. Akkasaligar, Sandeep HM, Biradar MS

Abstract:

Essential hypertension (HTN) usually clusters with other cardiovascular risk factors such as age, overweight, diabetes, insulin resistance and dyslipidemia. Target organ damage (TOD) such as left ventricular hypertrophy, microalbuminuria (MA), acute coronary syndrome (ACS), stroke and cognitive dysfunction occurs early in the course of hypertension. Though the prevalence of hypertension is high in India, the relationship between microalbuminuria and target organ damage in hypertension is not well studied. This study aims at detecting MA in essential hypertension and its relation to the severity of HTN, duration of HTN, body mass index (BMI), age and TOD such as hypertensive retinopathy and acute coronary syndrome. The present study was done in 100 non-diabetic patients with essential hypertension admitted to B.L.D.E. University's Sri B. M. Patil Medical College, Bijapur, from October 2008 to April 2011. The patients underwent detailed history taking and clinical examination. An early morning 5 ml urine sample was collected, and MA was estimated by the immunoturbidimetry method. The relationship of MA with the duration and severity of HTN, BMI, age, sex and TODs such as hypertensive retinopathy and ACS was assessed by univariate analysis. The prevalence of MA in this study was found to be 63%, of which 42% were male and 21% were female. There was a significant association between MA and the duration of hypertension (p = 0.036, OR = 0.438): the longer the duration of hypertension, the greater the likelihood of microalbumin in the urine. There was also a significant association between the severity of hypertension and MA (p = 0.045, OR = 0.093); MA was positive in 50 (79.4%) of the 63 patients whose blood pressure was >160/100 mm Hg. Significant associations were also found between MA and the grades of hypertensive retinopathy (p = 0.011) and acute coronary syndrome (p = 0.041, OR = 2.805). Gender and BMI did not pose a high risk for MA in this study. The prevalence of MA in essential hypertension is high in this part of the community, and MA increases the risk of developing target organ damage. Early screening of patients with essential hypertension for MA and aggressive management of positive cases might reduce the burden of chronic kidney disease and cardiovascular disease in the community.

Keywords: Acute coronary syndrome, Essential hypertension, Microalbuminuria, Target organ damage

239 Precipitation Intensity-Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley

Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara

Abstract:

The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations derived from daily rainfall data of the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Road Organization (BRO) and some ancillary landslide inventory data for 2013 and 2014 were used to determine the intensity-duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), was validated against the landslides of August and September 2014 with an accuracy of 70%, and was then adopted for prediction of landslides in the study region. From the obtained results and validation, it can be inferred that this equation can be used to forecast the initiation of landslides in the study area as part of an early warning system. Results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method to get first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
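
The derived power-law threshold can be applied directly to new rainfall events. Below is a minimal sketch assuming the equation I = 4.738D^(-0.025) with intensity I in mm/h and duration D in hours (units as commonly used in ID-threshold studies; the abstract does not state them explicitly):

```python
# Minimal sketch: flag rainfall events that plot on or above the derived
# intensity-duration threshold I = 4.738 * D**(-0.025).
# Units assumed: mm/h for intensity, hours for duration.

def threshold_intensity(duration_h: float) -> float:
    """Critical mean rainfall intensity (mm/h) for a given duration (h)."""
    return 4.738 * duration_h ** -0.025

def may_trigger_landslide(intensity_mmh: float, duration_h: float) -> bool:
    """True if the event lies on or above the ID threshold curve."""
    return intensity_mmh >= threshold_intensity(duration_h)

# Example: a 6-hour event with a mean intensity of 5 mm/h
print(threshold_intensity(6.0))         # ~4.53 mm/h
print(may_trigger_landslide(5.0, 6.0))  # True -> candidate for an alert
```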

Keywords: Landslide, intensity-duration, rainfall threshold, Tropical Rainfall Measuring Mission, slope, inventory, early warning system.

238 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and the housing subsector are fraught with risks that have the potential of negatively impacting the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks are managed. The recent paradigm shift by the subsector to the use of a formal risk management approach, in contrast to hitherto developed rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: a review of the literature on similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritization assessment of the risk factors using computer-based Analytical Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, ‘changes in demand for houses’ is prioritised by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of a multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
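
As a rough illustration of the AHP prioritization step described above, the sketch below derives a priority vector and consistency ratio from a pairwise comparison matrix; the 3x3 matrix and the risk names are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three risk factors
# (Saaty's 1-9 scale; entry [i, j] = importance of factor i over factor j).
risks = ["changes in demand for houses", "inflation", "design changes"]
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority vector = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio using Saaty's random index (RI = 0.58 for n = 3).
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
for name, weight in zip(risks, w):
    print(f"{name}: {weight:.3f}")
print(f"CR = {cr:.3f} (conventionally acceptable if < 0.10)")
```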

Keywords: Risk identification, risk assessment, analytical hierarchical process, multi-criteria decision.

237 Validation of 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesions

Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein

Abstract:

Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severity the lesion is usually covered with rough scale. Psoriasis Area and Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that needs to be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses severity through the tactile sense, so direct contact between doctor and patient is required; the problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a smooth average (curved) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models could not be accepted (percentage error greater than 10%); these errors occur due to the scanned image quality. The roughness algorithm was also validated by roughness measurement on abrasive papers on a flat surface. The Pearson's correlation coefficient between the grade value (G) of abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
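
A minimal sketch of the roughness computation described above: fit a low-order polynomial surface to the height map by least squares, subtract it to get the elevation (deviation) surface, and average the absolute deviations. The synthetic height map and the second-order polynomial are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Synthetic height map: a gently curved base plus a triangular waveform,
# mimicking the paper's rough-surface model (illustrative values only).
ny, nx = 64, 64
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
z = 0.5 * x**2 + 0.3 * y + 0.05 * np.abs((x * 20) % 2 - 1)

# Least-squares fit of a 2nd-order polynomial surface to estimate the
# smooth average surface.
X = np.column_stack([np.ones(z.size), x.ravel(), y.ravel(),
                     x.ravel()**2, x.ravel() * y.ravel(), y.ravel()**2])
coef, *_ = np.linalg.lstsq(X, z.ravel(), rcond=None)
z_avg = (X @ coef).reshape(z.shape)

deviation = z - z_avg              # elevation surface (surface deviations)
Ra = np.mean(np.abs(deviation))    # average roughness index
print(f"Ra = {Ra:.4f}")
```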

Keywords: psoriasis, roughness algorithm, polynomial surface fitting.

236 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model; this way, we can compare the performance, in terms of generalization, on unseen data across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates of the high-accuracy models, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
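
The loss-function/optimizer sweep described above can be organized as a simple grid over compilation settings. The sketch below uses tf.keras string identifiers for the losses and optimizers named in the abstract (center loss is omitted because it is not a built-in Keras loss and needs a custom implementation); the tiny stand-in network and data placeholders are assumptions, not the paper's AlexNet/VGGNet/ResNet models.

```python
import tensorflow as tf

def build_small_cnn(input_shape=(96, 96, 1), n_classes=2):
    """Tiny stand-in CNN; the paper used AlexNet, VGGNet and ResNet."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

losses = ["categorical_crossentropy", "hinge",
          tf.keras.losses.CosineSimilarity()]   # cosine-proximity analogue
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

results = {}
for loss in losses:
    for opt in optimizers:
        model = build_small_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        # history = model.fit(x_train, y_train,
        #                     validation_data=(x_val, y_val), epochs=10)
        # results[(str(loss), opt)] = history.history["val_accuracy"][-1]
```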

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

235 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while keeping devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. The installation of a temperature sensor matrix distributed in the structure of each server would provide the data required for obtaining a temperature profile inside them instantly. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and they are expensive. Therefore, other, less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, regardless of the characteristic truncation error.
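
To make the discretization choices concrete, the sketch below advances the 1D viscous Burgers' equation u_t + u·u_x = ν·u_xx with an explicit forward step in time, and either a backward (upwind) or a central difference for the convective derivative; the grid, viscosity and initial condition are arbitrary illustrative values, not the paper's server model.

```python
import numpy as np

# 1D viscous Burgers' equation, u_t + u*u_x = nu*u_xx, forward in time;
# convection discretized backward (upwind) or central, diffusion central.
nx, nt = 101, 500
dx, dt, nu = 1.0 / (nx - 1), 5e-4, 0.01
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)              # illustrative initial condition, u >= 0

def step(u, scheme="backward"):
    un = u.copy()
    if scheme == "backward":       # upwind convection (valid for u >= 0)
        conv = un[1:-1] * (un[1:-1] - un[:-2]) / dx
    else:                          # central convection
        conv = un[1:-1] * (un[2:] - un[:-2]) / (2 * dx)
    diff = nu * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2
    u[1:-1] = un[1:-1] + dt * (diff - conv)
    return u                       # boundary values stay fixed at zero

for _ in range(nt):
    u = step(u, scheme="backward")
print(u.max())                     # peak of the decaying wave
```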

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

234 Modeling a Multinomial Logit Model of Intercity Travel Mode Choice Behavior for All Trips in Libya

Authors: Manssour A. Abdulsalam Bin Miskeen, Ahmed Mohamed Alhodairi, Riza Atiq Abdullah Bin O. K. Rahmat

Abstract:

From the planning point of view, mode choice modeling is essential because of the massive costs incurred in transportation systems. Intercity travellers in Libya have distinct features compared with travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to characterize intercity travel behavior using disaggregate models, for projecting the demand for nation-level intercity travel in Libya. A multinomial logit model for all intercity trips has been formulated to examine national-level intercity transportation in Libya. The multinomial logit model was calibrated using a nationwide revealed preference (RP) and stated preference (SP) survey. The model was developed for different intercity trip purposes (work, social and recreational), and its coefficients were estimated using the maximum likelihood method. The data needed for model development were obtained from all major intercity corridors in Libya; the final sample consisted of 1300 interviews. About two-thirds of these data were used for model calibration, and the remainder was used for model validation. This study, which is the first of its kind in Libya, investigates intercity travelers' mode-choice behavior. The intercity travel mode-choice model was successfully calibrated and validated. The outcomes indicate that the overall model is effective and yields high estimation precision. The proposed model is beneficial because it is responsive to many variables and can be employed to determine the impact of changes in numerous characteristics on the demand for various travel modes. Estimates from the model may also be of value to planners, who can estimate choice probabilities for the various modes and determine the impact of specific policy modifications on the demand for intercity travel.
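
A minimal sketch of the maximum likelihood calibration behind a multinomial logit model: softmax choice probabilities whose coefficients minimize the negative log-likelihood. The three modes, two covariates and simulated choices below are hypothetical placeholders, not the survey's actual specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k, J = 500, 2, 3                 # travellers, covariates, modes

X = rng.normal(size=(n, k))         # e.g., standardized cost and time
true_beta = np.array([[1.0, -0.5],  # modes 1 and 2 relative to base mode 0
                      [-0.8, 0.7]])
v_true = np.column_stack([np.zeros(n), X @ true_beta.T])
p = np.exp(v_true) / np.exp(v_true).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=pi) for pi in p])   # simulated mode choices

def neg_log_likelihood(beta_flat):
    beta = beta_flat.reshape(J - 1, k)
    v = np.column_stack([np.zeros(n), X @ beta.T])  # systematic utilities
    v -= v.max(axis=1, keepdims=True)               # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n), y].sum()

res = minimize(neg_log_likelihood, np.zeros((J - 1) * k), method="BFGS")
print(res.x.reshape(J - 1, k))      # should approximate true_beta
```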

Keywords: Multinomial logit model, improved intercity transport, intercity mode-choice behavior, disaggregate analysis.

233 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information on Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and creation of objects. The selection among these methods depends on the desired Level of Development (LOD), Level of Information (LOI) and Grade of Generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using the Recap Pro, Revit and Dynamo interfaces, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, wall openings) and architectural elements (e.g., cornices, moldings and other minor details) using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, together with the use of only one BIM software package with its respective plugin for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.

232 Stress Analysis of Hexagonal Element for Precast Concrete Pavements

Authors: J. Novak, A. Kohoutkova, V. Kristek, J. Vodicka, M. Sramek

Abstract:

While the use of cast-in-place concrete for airfield and highway pavement overlays is very common, the application of precast concrete elements is very limited today. The main reasons are high production costs and complex structural behavior. Despite that, several precast concrete systems have been developed and tested with the aim of providing a system with rapid construction. This contribution deals with the reinforcement design of a hexagonal element developed for a proposed airfield pavement system. The sub-base course of the system is composed of compacted recycled concrete aggregates, with fiber reinforced concrete made with recycled aggregates placed on top of it. The selected element belongs to a group of precast concrete elements being considered for the construction of the surface course. Both the high cost of full-scale experiments and the need to investigate various elements make it necessary to simulate their behavior in numerical analysis software using the finite element method instead of performing expensive experiments. The simulation of the selected element was conducted on a nonlinear model in order to obtain results that could substitute for experimental results. The main objective was to design the reinforcement of the precast concrete element subjected to quasi-static loading from airplanes, with respect to geometrical imperfections, manufacturing imperfections, tensile stress in the reinforcement, compressive stress in the concrete and crack width. The findings demonstrate that the position and presence of imperfections in a pavement strongly affect the stress distribution in the precast concrete element. The precast concrete element should be heavily reinforced to fulfill all the demands; using under-reinforced concrete elements would lead to the formation of wide, permanently open cracks.

Keywords: Imperfection, numerical simulation, pavement, precast concrete element, reinforcement design, stress analysis.

231 21st Century Biotechnological Research and Development Advancements for Industrial Development in India

Authors: Monisha Isaac

Abstract:

Biotechnology is a discipline concerned with the use of living organisms and systems to construct products; it can be defined as an application or technology developed to use biological systems and organisms' processes for a specific purpose. In particular, it includes the use of cells and their components for new technologies and inventions. The tools developed can be applied in diverse fields such as agriculture, industry, research and hospitals. The 21st century has seen drastic development and advancement of biotechnology in India, with a significant increase in the Government of India's outlays for biotechnology over the past decade. A sectoral breakup of biotechnology-based companies in India shows that most are agriculture-based companies with interests ranging from tissue culture to biopesticides. Major attention has been given by companies to health-related activities and to environmental biotechnology. The biopharmaceutical segment, which comprises vaccines, diagnostics and recombinant products, is the largest and most reliable segment of the Indian biotech industry; India has developed its vaccine markets and supplies vaccines to various countries. Then there are the bio-services, which mainly comprise contract research and manufacturing services. India has also made noticeable progress in the field of bio-industries, including the manufacturing of enzymes, biofuels and biopolymers. Biotechnology is playing a crucial role in agriculture as well, where traditional methods have been replaced by new technologies that mainly focus on GM crops, marker-assisted technologies and the use of biotechnological tools to improve the quality of fertilizers and soil; it may only be a small contributor so far, but it has shown huge potential for growth. Bioinformatics, a computational discipline that helps to store, manage, arrange and design tools to interpret the extensive data gathered through experimental trials, is likewise important in the design of drugs.

Keywords: Biotechnology, advancement, agriculture, bio-services, bio-industries, bio-pharmaceuticals.

230 Bronchospasm Analysis Following the Implementation of a Program of Maximum Aerobic Exercise in Active Men

Authors: Sajjad Shojaeidoust, Mohsen Ghanbarzadeh, Abdolhamid Habibi

Abstract:

Exercise-induced bronchospasm (EIB) is a transitory condition of airflow obstruction that is associated with physical activity. High ventilation can increase heat and moisture loss in the airways of the trachea, which is considered a pathophysiological mechanism of EIB. Accordingly, studying parameters of pulmonary function (FVC, FEV1) among active people seems essential. The aim of this study was to analyze bronchospasm following the implementation of a program of maximum aerobic exercise in active men at Chamran University of Ahwaz. Method: In this quasi-experimental study, the population consisted of students at Chamran University; from 55 participants, 15 were randomly selected as the experimental group. The maximum oxygen consumption was measured first and, based on the maximum oxygen consumed, the active individuals were identified. After a five-minute warm-up, the Åstrand treadmill exercise test was administered (one session), and pulmonary parameters were measured with a spirometer at both pre-test and post-test. After checking normality with the Kolmogorov-Smirnov test and finding the data non-normal, the Wilcoxon test was used to analyze the data; the significance level for all statistical analyses was set at p ≤ 0.05. Results: The ventilation and bronchospasm factors (FVC, FEV1) showed no significant pre-test to post-test difference among the active participants (p > 0.05). Discussion and conclusion: Based on the observed results, it appears that pulmonary indices in active individuals increased after the aerobic test. This increase in active people is attributed to increased lung volume and elasticity; in other words, the pulmonary indices are affected by the rib muscles. It is considered that gains in respiratory muscle strength and endurance raised FEV1 in the active cases.
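
The pre/post comparison described above maps directly onto SciPy's Wilcoxon signed-rank test. The FEV1 values below are hypothetical placeholders purely to show the call; the study's own spirometry measurements are in the paper.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical pre/post FEV1 values (litres) for 15 participants.
pre  = np.array([3.9, 4.1, 3.8, 4.4, 4.0, 3.7, 4.2, 3.9,
                 4.3, 4.0, 3.8, 4.1, 4.5, 3.6, 4.0])
post = np.array([4.0, 4.2, 3.8, 4.5, 4.1, 3.8, 4.2, 4.0,
                 4.4, 4.1, 3.9, 4.1, 4.6, 3.7, 4.1])

stat, p = wilcoxon(pre, post)      # paired, non-parametric test
print(f"W = {stat:.1f}, p = {p:.3f}")
print("significant at 0.05" if p <= 0.05 else "no significant difference")
```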

Keywords: Bronchospasm, maximum aerobic exercise, pulmonary function, spirometer.

229 Multi-Objective Optimization of Gas Turbine Power Cycle

Authors: Mohsen Nikaein

Abstract:

Because of the importance of energy, optimization of power generation systems is necessary. Gas turbine cycles are a suitable means of fast power generation, but their efficiency is relatively low. In order to achieve higher efficiencies, modifications such as recovery of heat from the exhaust gases in a regenerator, use of an intercooler in a multistage compressor, and steam injection into the combustion chamber are adopted. However, thermodynamic optimization of the gas turbine cycle, even with the above components, is still necessary. In this article, multi-objective genetic algorithms are employed for Pareto-approach optimization of the Regenerative-Intercooling Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are optimized simultaneously. The important objective functions considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency, and the net output power of the RIGT cycle; these objectives usually conflict with each other. The design variables consist of thermodynamic parameters such as the compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT) and inlet air temperature (T0). At the first stage, single-objective optimization was investigated, and the Non-dominated Sorting Genetic Algorithm (NSGA-II) was used for multi-objective optimization. Optimization procedures were performed for two and three objective functions, and the results are compared for the RIGT cycle. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually, and for each set the Pareto front is depicted. The sets of decision variables selected on this Pareto front yield the best possible combinations of the corresponding objective functions. No point on the Pareto front is superior to another point on the front, but all are superior to any other point. In the case of three-objective optimization, the results are given in tables.
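
The statement that Pareto points are mutually non-dominated yet superior to all other points can be made concrete with a small dominance filter. The sketch below extracts the non-dominated set from candidate designs scored on two objectives; the design scores are hypothetical, with efficiency negated so that both objectives are minimized.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points (all objectives minimized).

    A point is dominated if some other point is no worse in every
    objective and strictly better in at least one.
    """
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical RIGT candidates: (entropy generation Ns, -thermal efficiency).
designs = [(0.9, -0.32), (0.7, -0.30), (0.8, -0.34),
           (1.1, -0.35), (0.7, -0.28)]
print(pareto_front(designs))   # -> [1, 2, 3], the Pareto-optimal designs
```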

Keywords: Exergy, Entropy Generation, Brayton Cycle, Design Parameters, Optimization, Genetic Algorithm, Multi-Objective.

228 The Necessity of Optimized Management on Surface Water Sources of Zayanderood Basin

Authors: A. Gandomkar, K. Fouladi

Abstract:

One of the key factors in the comprehensive development of an area is the provision of water sources and, equally, their appropriate management. Population growth and food security for such a population necessitate sustainable development alongside a reform of traditional management in order to increase the yield of the sources; in this way, sustainable exploitation of the sources for the next generations is built into the program. Achieving this development is very difficult without considering the possibilities of water development. The Zayanderood basin, with an area of 41,500 square kilometers, contains 7 sub-basins and 20 hydrologic units. Of the total precipitation falling on this basin, only a small part enters the river currents; the rest is lost to efficient use in various ways. The most important surface current of the basin is the Zayanderood River, 403 kilometers long, which originates on the eastern slopes of the Zagros mountains and, after draining the basin, enters the Gaavkhooni wetland. The existence of various water sources and consumptions in the Zayanderood basin, the transfer of water from other basins into this one, the conflict between upstream and downstream beneficiaries, the presence of valuable natural ecosystems such as the Gaavkhooni swamp, and finally the drought conditions and water scarcity of the area all necessitate comprehensive management of the water sources of this central Iranian basin; such management treats the development and management of water sources in a balanced way so as to increase economic and social benefits. In this study, the network of surface water sources of the basin is surveyed in its upstream and downstream sections. Considering the difficulties and deficiencies of efficient water source management in this basin, as well as the problems of water drainage and the destructive phenomenon of floodwater, guidelines appropriate to the conditions of the region are presented in order to prevent the diversion of water in the upstream sections and to support the development of the regions downstream of the Zayanderood dam.

Keywords: Zayanderood Basin, Efficient Management, Hydrology, Climate.

227 Adverse Curing Conditions and Performance of Concrete: Bangladesh Perspective

Authors: T. Manzur

Abstract:

Concrete is the predominant construction material in Bangladesh. In large projects, stringent quality control procedures are usually followed under the supervision of experienced engineers and skilled labor. However, in small projects, and particularly at locations distant from major cities, proper quality control is often an issue. Experience shows that such quality issues mainly arise from inappropriate proportioning of concrete mixes and improper curing conditions. In most cases an external curing method is followed, which requires the supply of an adequate quantity of water along with proper protection against evaporation. These conditions are often missing on general construction sites and eventually lead to the production of weaker concrete, in terms of both strength and durability. In this study, an attempt has been made to investigate the performance of general concreting work in the country when subjected to several adverse curing conditions that are quite common on various small to medium construction sites. A total of six different adverse curing conditions were simulated in the laboratory, and samples were kept under those conditions for several days. A set of samples was also submerged in a normal curing condition with a proper supply of curing water. The performance of the concrete was evaluated in terms of compressive strength, tensile strength, chloride permeability and drying shrinkage. Reductions of about 37% and 25% in 28-day compressive and tensile strength, respectively, were observed for samples subjected to the most adverse curing condition, as compared to the samples under normal curing conditions. Normal curing concrete exhibited moderate permeability (close to low permeability), whereas concrete under adverse curing conditions showed very high permeability values. Similar results were obtained for the shrinkage tests. This study will thus assist concerned engineers and supervisors in understanding the importance of quality assurance during the curing period of concrete.

Keywords: Adverse, concrete, curing, compressive strength, drying shrinkage, permeability, tensile strength.

226 Identification of Flexographic-Printed Newspapers with NIR Spectral Imaging

Authors: Raimund Leitner, Susanne Rosskopf

Abstract:

Near-infrared (NIR) spectroscopy is a widely used method of material identification in laboratory and industrial applications. While standard spectrometers only allow measurement at one sampling point at a time, NIR Spectral Imaging (SI) techniques can measure, in real time, both the size and shape of an object and identify the material the object is made of. Online classification and sorting of recovered paper with NIR SI is used successfully in the paper recycling industry throughout Europe. Recently, the globalisation of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Thus, flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project on the development of an automated entry inspection system for recovered paper, jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper was developed. The prototype can identify and sort out flexo-printed newspapers in real time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, will in the near future allow the development of inspection systems for incoming goods in paper production facilities as well as industrial sorting systems for recovered paper in the recycling industry.
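
Per-pixel material classification of NIR spectra, the core of such a sorting system, can be prototyped with an off-the-shelf classifier. The sketch below trains a random forest on labeled spectra (flexo-printed vs. other recovered paper); the synthetic spectra and band count are placeholders for real hyperspectral measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_bands = 64                          # assumed number of NIR channels

# Synthetic stand-in spectra: two classes with slightly shifted means.
flexo = rng.normal(0.55, 0.05, size=(300, n_bands))
other = rng.normal(0.50, 0.05, size=(300, n_bands))
X = np.vstack([flexo, other])
y = np.array([1] * 300 + [0] * 300)   # 1 = flexo-printed newspaper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.2%}")
```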

Keywords: spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.

225 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Progress in pavement design has led to the development of a design method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). Nowadays, the road and highway network in Saudi Arabia is evolving as a result of increasing traffic volume, and the MEPDG is currently implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires calibration of the distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for the calibration of the MEPDG in central Saudi Arabia. Thus, the first goal is the collection of flexible pavement design data for the local conditions of the Riyadh region. Since the collected data must be converted into model inputs, the main goal of this paper is the analysis of the collected data. The data analysis covers: truck classification, traffic growth factor, annual average daily truck traffic (AADTT), monthly adjustment factors (MAFi), vehicle class distribution (VCD), truck hourly distribution factors, axle load distribution factors (ALDF), the number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on these findings.
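
Several of the traffic inputs listed above reduce to simple arithmetic once counts are available. A small sketch, using hypothetical corridor numbers (not the Riyadh data), of computing a compound traffic growth factor and splitting AADTT across truck classes with the vehicle class distribution:

```python
# Hypothetical inputs, not the Riyadh survey values.
aadtt_base = 2400          # base-year average annual daily truck traffic
growth_rate = 0.04         # assumed 4% compound annual growth
design_years = 10

# Compound growth factor and design-year AADTT.
gf = (1 + growth_rate) ** design_years
aadtt_design = aadtt_base * gf

# Split across truck classes with an assumed vehicle class distribution.
vcd = {"class 5": 0.30, "class 6": 0.25, "class 9": 0.35, "class 10": 0.10}
by_class = {c: aadtt_design * share for c, share in vcd.items()}

print(f"growth factor over {design_years} years: {gf:.2f}")
for c, v in by_class.items():
    print(f"{c}: {v:.0f} trucks/day")
```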

Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.

224 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin

Abstract:

Higher ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make predictions or estimations, whereas analysis focusing on the higher, extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate return levels using the corresponding model (Gumbel, Weibull, or Fréchet) of the GEV distribution. The results show that the Weibull distribution, which is known as a short-tailed distribution and considered to have less extreme behaviour, is the best-fitted distribution for four of the selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conducted this study using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Next, the fit of the block maxima series to the GEV distribution was validated using a probability plot, a quantile plot and the likelihood ratio test, and the profile likelihood confidence interval was examined to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
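
The GEV fitting and return-level estimation described above can be reproduced with SciPy, whose genextreme distribution covers the Gumbel, Weibull and Fréchet cases through its shape parameter. The synthetic block-maxima series below is a placeholder for a station's ozone maxima.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Placeholder block maxima of GLO concentration (ppb); real data would
# come from the air monitoring stations.
maxima = rng.gumbel(loc=90, scale=12, size=40)

shape, loc, scale = genextreme.fit(maxima)   # MLE fit of the GEV
# SciPy's shape c: c ~ 0 Gumbel, c > 0 Weibull-type, c < 0 Frechet-type.
tail = "Gumbel" if abs(shape) < 0.05 else \
       ("Weibull" if shape > 0 else "Frechet")
print(f"shape = {shape:.3f} -> {tail}-type tail")

for T in (10, 50, 100):                      # return periods (in blocks)
    level = genextreme.isf(1.0 / T, shape, loc, scale)
    print(f"{T}-block return level: {level:.1f} ppb")
```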

Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.

223 Effects of Irrigation Scheduling and Soil Management on Maize (Zea mays L.) Yield in Guinea Savannah Zone of Nigeria

Authors: I. Alhassan, A. M. Saddiq, A. G. Gashua, K. K. Gwio-Kura

Abstract:

The main objective of any irrigation program is the development of an efficient water management system that sustains crop growth and development and avoids physiological water stress in the growing plants. A field experiment to evaluate the effects of some soil moisture conservation practices on the yield and water use efficiency (WUE) of maize was carried out at three locations (Mubi and Yola in the northern Guinea Savannah and Ganye in the southern Guinea Savannah of Adamawa State, Nigeria) during the dry seasons of 2013 and 2014. The experiment consisted of three irrigation levels (7-, 10- and 12-day irrigation intervals), two mulch levels (mulched and un-mulched) and two tillage practices (no tillage and minimum tillage), arranged in a randomized complete block design with a split-split plot arrangement and replicated three times. The Blaney-Criddle method was used for estimating crop evapotranspiration. The results indicated that the seven-day irrigation interval and the mulched treatment had significant effects (P < 0.05) on grain yield and water use efficiency at all locations. The main effect of tillage on grain yield and WUE was non-significant (P > 0.05). The interaction effects of irrigation and mulch on grain yield and WUE were significant (P < 0.05) at Mubi and Yola. Generally, higher grain yield and WUE were recorded with mulching and seven-day irrigation intervals, whereas lower values were recorded un-mulched with 12-day irrigation intervals; tillage exerted little influence on yield and WUE. Results from Ganye were generally higher than those recorded at Mubi and Yola. The results also suggest that an irrigation interval of 10 days with mulching could be adopted for the Ganye area, while a seven-day interval is more appropriate for Mubi and Yola.
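
The Blaney-Criddle method used above for crop evapotranspiration has a standard simplified form, ETo = p(0.46·T_mean + 8) in mm/day, where p is the mean daily percentage of annual daytime hours. A short sketch with hypothetical dry-season values (not the study's measurements):

```python
def blaney_criddle_eto(t_mean_c: float, p_daylight: float) -> float:
    """Reference evapotranspiration (mm/day), simplified Blaney-Criddle:
    ETo = p * (0.46 * T_mean + 8), with T_mean in deg C and p the mean
    daily percentage of annual daytime hours for the month and latitude."""
    return p_daylight * (0.46 * t_mean_c + 8.0)

# Hypothetical dry-season values for a Guinea Savannah site.
eto = blaney_criddle_eto(t_mean_c=28.0, p_daylight=0.27)
print(f"ETo = {eto:.1f} mm/day")

# Crop evapotranspiration then follows as ETc = Kc * ETo.
kc_maize_mid = 1.15   # indicative mid-season maize crop coefficient
print(f"ETc = {kc_maize_mid * eto:.1f} mm/day")
```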

Keywords: Irrigation, maize, mulching, tillage, guinea savannah.

222 Wastewater Treatment in Moving-Bed Biofilm Reactor Operated by Flow Reversal Intermittent Aeration System

Authors: B. K. Kim, D. Chang, D. J. Son, D. W. Kim, J. K. Choi, H. J. Yeon, C. Y. Yoon, Y. Fan, S. Y. Lim, K. H. Hong

Abstract:

The intermittent aeration process can easily be applied to an existing activated sludge system, is highly reliable against loading changes, and can be operated in a relatively simple way. Since the moving-bed biofilm reactor retains microorganisms attached to and secured on media, its process efficiency can be higher than that of suspended-growth biological treatment processes, and it can reduce sludge return. In this study, the intermittent aeration process with alternating flow, previously applied to the oxidation ditch, is applied to a continuous flow stirred tank reactor (CFSTR), combining the advantages of both processes; by adding moving media, we aim to develop a process that greatly reduces sludge return from the clarifier and secures reliable treated water quality. The process has a form appropriate to infrastructure based on the u-environment of the future u-City and is expected to accelerate the implementation of the u-Eco city in conjunction with city-based services. The laboratory-scale system was operated at an HRT of 8 hours, excluding the final clarifier, and achieved removal efficiencies of 97.7%, 73.1% and 9.4% for organic matter, TN and TP, respectively, with a 4-hour operating cycle at a system SRT of 10 days. After the media were added, the phosphorus removal efficiency remained at a level similar to that before the addition, but the nitrogen removal efficiency improved by 7-10%. In addition, the solids, maintained at an MLSS of 1200-1400 at 25% media packing, all attached onto the media, so that no sludge entered the clarifier and sludge return was no longer needed.

Keywords: Municipal wastewater treatment, Biological nutrient removal, Alternating flow intermittent aeration system, Reversal flow intermittent aeration system, Moving-bed biofilm reactor, CFSTR, u-City, u-Eco city

221 Shear Capacity of Rectangular Duct Panel Experiencing Internal Pressure

Authors: K. S. Sivakumaran, T. Thanga, B. Halabieh

Abstract:

The end panels of a large rectangular industrial duct, which experience significant internal pressures, also experience considerable transverse shear due to the transfer of gravity loads to the supports. The current design practice for such thin plate panels under shear load is based on methods used for the design of plate girder webs. The structural arrangements, the loadings and the resulting behavior of industrial duct end panels are, however, significantly different from those of the web of a plate girder. The large aspect ratio of the end panels gives rise to multiple bands of tension fields, whereas plate girder web design is based on a single tension field. In addition to shear, the industrial end panels are subjected to internal pressure, which in turn produces significant membrane action. This paper reports a study undertaken to review the current industrial analysis and design methods and to propose a comprehensive method for designing industrial duct end panels for shear resistance. In this investigation, a nonlinear finite element model was developed to simulate the behavior of an industrial duct end panel, along with the associated edge stiffeners, subjected to transverse shear and internal pressure. The model considered geometric imperfections and constitutive relations for steel. Six scale-independent dimensionless parameters that govern the behavior of such end panels were identified and then used in a parametric study. It was concluded that plate slenderness dominates the shear strength of stockier end panels, whereas both plate slenderness and aspect ratio influence the shear strength of slender end panels. Based on these studies, this paper proposes design aids for estimating the shear strength of rectangular duct end panels.

Keywords: Thin plate, transverse shear, tension field, finite element analysis, parametric study, design.

220 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral Imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of the sugar distribution of melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can concurrently collect large amounts of spatial and spectral data on the objects being observed, yielding detection capabilities that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for the detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving problems of a two-pattern nature. The conventional FKT method has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by regarding the fat as the target class, to be separated from the remaining classes (treated as clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique achieves high detection performance for the fat ratio in ground meat.
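
The linear FKT underlying the kernel version described above rests on simultaneous diagonalization of the two class covariance matrices: after whitening their sum, the eigenvectors that best represent the target class are exactly those that least represent the clutter. A minimal linear sketch (the paper's contribution is the kernelized variant, which this does not reproduce):

```python
import numpy as np

def fkt_basis(target, clutter):
    """Linear Fukunaga-Koontz transform.

    Whiten the summed covariance, then eigendecompose the target
    covariance in the whitened space; eigenvalues near 1 favour the
    target class and near 0 the clutter class (they sum to 1 per axis).
    """
    s1 = np.cov(target, rowvar=False)
    s2 = np.cov(clutter, rowvar=False)
    evals, evecs = np.linalg.eigh(s1 + s2)
    w = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12)))
    lam, v = np.linalg.eigh(w.T @ s1 @ w)
    return w @ v, lam        # FKT basis and target-class eigenvalues

rng = np.random.default_rng(3)
fat  = rng.normal(0.6, 0.08, size=(200, 10))   # hypothetical fat spectra
lean = rng.normal(0.4, 0.05, size=(200, 10))   # hypothetical clutter
basis, lam = fkt_basis(fat, lean)
print(lam)   # axes with lam near 1 are most discriminative for fat
```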

Keywords: Food (Ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods.

219 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization

Authors: Himanshu Shekhar Maharana, S. K .Dash

Abstract:

Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation among the various units so as to compute the cost of generation and the cost of emission of gases such as sulphur dioxide, nitrous oxide and carbon monoxide. In this paper, we employ ramp rate and constriction factor based particle swarm optimization (RRCPSO) to analyze several performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both, through simulated experimental results. A 6-unit, 30-bus IEEE test system was used for the simulations, with improved weight factors and advanced ramp rate limit constraints, to optimize the total cost of generation and emission. The method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier approaches using dispersed PSO (DPSO) and constriction factor based PSO (CPSO) incur comparatively higher computational times and yield poorer optimal solutions than the present work. This paper applies the well-defined ramp rate and constriction factor based PSO to compute the various objectives, namely cost, emission and the total objective, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, demonstrating lower computational time and better optimal solutions.
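
A bare-bones illustration of constriction-factor PSO with a ramp-rate-style limit, the mechanism named above: Clerc's coefficient χ = 2/|2 − φ − √(φ² − 4φ)| with φ = c1 + c2 > 4 scales the velocity update, and a clamp bounds how far each generation variable may move per iteration. The two-unit cost function, demand and limits are hypothetical stand-ins for the 6-unit, 30-bus problem.

```python
import numpy as np

rng = np.random.default_rng(4)
c1 = c2 = 2.05
phi = c1 + c2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # Clerc's ~0.7298

demand = 300.0                       # system load in MW (assumed)
lo, hi, ramp = 50.0, 200.0, 10.0     # unit limits and per-step ramp cap

def fitness(p):
    """Hypothetical 2-unit quadratic fuel cost plus balance penalty."""
    fuel = 0.004 * p[0]**2 + 2.0 * p[0] + 0.006 * p[1]**2 + 1.8 * p[1]
    return fuel + 100.0 * abs(p.sum() - demand)

n_particles, dim, iters = 20, 2, 300
x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.apply_along_axis(fitness, 1, x)
g = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
    v = np.clip(v, -ramp, ramp)      # ramp-rate-style limit on movement
    x = np.clip(x + v, lo, hi)
    f = np.apply_along_axis(fitness, 1, x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(g, fitness(g))                 # near-optimal dispatch for the demand
```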

Keywords: Economic load dispatch, constriction factor based particle swarm optimization, dispersed particle swarm optimization, weight improved particle swarm optimization, ramp rate and constriction factor based particle swarm optimization.

218 A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery

Authors: Yongquan Zhao, Bo Huang

Abstract:

Spatial, temporal, and spectral resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously, because of the hardware technology limitations of satellite sensors. At the same time, the demand for high STSR has been growing with the development of remote sensing applications. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, existing fusion technologies cannot enhance all three resolutions simultaneously or provide a sufficient level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, blending the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM produces fused images with good spatial and spectral fidelity to the reference image, showing its potential to generate synthetic data to support studies that require high-STSR satellite imagery.

Keywords: Hybrid spatial-temporal-spectral fusion, high resolution synthetic imagery, least square regression, sparse representation, spectral transformation.

217 Micropropagation and in vitro Conservation via Slow Growth Techniques of Prunus webbii (Spach) Vierh: An Endangered Plant Species in Albania

Authors: Valbona Sota, Efigjeni Kongjika

Abstract:

Wild almond is a woody species that is difficult to propagate either generatively by seed or by vegetative methods (grafting or cuttings), and it is classified as Endangered (EN) in Albania based on IUCN criteria. As a wild relative of cultivated fruit trees, this species represents a source of genetic variability and can be very important in breeding programs and cultivation. For this reason, an effective method of in vitro mid-term conservation is of interest; such methods involve strategies to slow plant growth through physicochemical alterations of the in vitro growth conditions. Multiplication of wild almond was carried out using zygotic embryos as primary explants, with the purpose of developing a successful propagation protocol. The results showed that zygotic embryos can proliferate through direct or indirect organogenesis. During the subculture stage, a great number of new plantlets identical to the mother plants were obtained from the zygotic embryos. All in vitro plantlets obtained from the subcultures underwent in vitro conservation by minimal growth at low temperature (4ºC) in darkness. The efficiency of this technique was evaluated for conservation periods of 3, 6, and 10 months. Maintenance under these conditions reduced microcutting growth. Survival and regeneration rates were evaluated for each period; the maximal conservation time without subculture at 4ºC was 10 months, but by then survival and regeneration rates were significantly reduced, to 15.6% and 7.6% respectively. An optimal conservation period under these conditions is 5-6 months of storage, which gives survival and regeneration rates of about 60% and 50%. This protocol may be beneficial for mass propagation, mid-term conservation, and genetic manipulation of wild almond.

Keywords: Micropropagation, minimal growth, storage, wild almond.

216 Preferred Character Size for Oblique Angles

Authors: Photjanat Phimnom, Haruetai Lohasiriwat

Abstract:

In today's world, LED displays are used to present visual information under various circumstances, and such information is an important intermediary in human information processing. Researchers have investigated diverse factors that influence the effectiveness of this process. Letter size is undoubtedly one major factor that has been tested and is recommended by many standards and guidelines. However, these typically assume the information on the display is viewed from a directly perpendicular position, whereas many actual situations require viewing from an angle. The current research aims to study the effect of oblique viewing angle and viewing distance on the ability to recognize alphabets, numbers, and English words. A total of ten participants volunteered for our 3 × 4 × 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabet, number, short word, and long word). Following the method of constant stimuli, our study suggests that larger oblique angles, over the range of 0 to 75 degrees from the line of sight, result in a significantly higher legibility threshold, i.e., a larger required font size (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value < 0.05); however, the effect of distance is expected to be confounded by the quality of the screen used in our experiment. Lastly, our results show that single alphabets and single numbers are recognized at a significantly lower threshold (smaller font size) than both short and long words (p-value < 0.05). Therefore, when designing information to be presented on an LED display, the full range of possible oblique angles should be taken into account when specifying the preferred letter size. A recommendation of letter sizes for 100% legibility under our tested conditions is provided in the paper.
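
In the method of constant stimuli mentioned above, the legibility threshold is typically read off a psychometric function fitted to the proportion of correct recognitions at each tested letter size. A sketch with hypothetical proportions, fitting a logistic curve and solving for the size that yields a criterion level of performance:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: letter heights (mm) and proportion correctly recognized.
size = np.array([5, 10, 15, 20, 25, 30], dtype=float)
p_correct = np.array([0.10, 0.35, 0.62, 0.85, 0.95, 0.99])

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, size, p_correct, p0=[15.0, 0.3])

criterion = 0.75                   # target recognition level
threshold = x0 + np.log(criterion / (1 - criterion)) / k
print(f"letter size for {criterion:.0%} recognition: {threshold:.1f} mm")
```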

Keywords: Letter Size, Oblique Angle, Viewing Distance, Legibility Threshold.

215 Effect of Twelve Weeks Brisk Walking on Blood Pressure, Body Mass Index, and Anthropometric Circumference of Obese Males

Authors: Kaukab Azeem

Abstract:

Introduction: Obesity is a major health risk in present-day life for everyone globally and is one of the major public health concerns, given recent increasing trends in obesity-related diseases such as Type 2 diabetes (Kazuya, 1994) and hyperlipidemia (Sakata, 1990), which are more prevalent in Japanese adults with body mass index (BMI) values ≥ 25 kg/m2 (Japanese Ministry of Health and Welfare, 1997). The purpose of the study was to assess the effect of twelve weeks of brisk walking on the blood pressure, body mass index, and anthropometric measurements of obese males. Method: Thirty obese males (BMI above 30), aged 18 to 22 years, were selected from King Fahd University of Petroleum & Minerals, Saudi Arabia. The subjects' height (cm) was measured using a stadiometer and body mass (kg) was measured with an electronic weighing machine; BMI was subsequently calculated (kg/m2). Blood pressure was measured with a standardized sphygmomanometer in mm Hg. All measurements were taken twice before and twice after the experimental period. The pre and post anthropometric measurements of waist and hip circumference were taken with a steel tape in cm. The subjects underwent a walking schedule twice a week for 12 weeks; the 45-minute sessions of brisk walking were undertaken at an average intensity of 65% to 85% of maximum HR (HRmax; calculated as 220 - age). Results & Discussion: Statistical findings revealed significant changes from pre-test to post-test in both systolic and diastolic blood pressure in the walking group. The results also showed significant decreases in body mass index and in the anthropometric measurements (waist and hip circumference). Conclusion: It was concluded that twelve weeks of brisk walking is beneficial for lowering the blood pressure, body mass index, and anthropometric circumferences of obese males.

Keywords: Anthropometric, Blood pressure, Body mass index

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3032
214 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Due to the high reliability reached by DNA tests since the 1980s, this kind of test has allowed the resolution of a growing number of criminal cases, including old unsolved cases that now have a chance of being solved with this technology. Currently, the use of genetic profiling databases is a typical method of widening the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and large storage capacity. It is therefore essential to develop software-based methodologies that organize the workflow and minimize the time spent on both biological sample processing and genetic profile analysis. Thus, the present work aims at the development of a software system for forensic genetics laboratories which allows the management of samples, criminal cases, and a local database, minimizes the time spent in the workflow, and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, together with the workflows and requirements the system must incorporate, were considered. The system is built with Web technologies, using HTML, CSS, and JavaScript, on the NodeJS server platform, which offers high efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, favouring acceptance among users. The software system developed here brings more agility to the workflow and to the analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to an increased resolution of crimes. The next step of this research is validation, so that the system operates in accordance with current Brazilian national legislation.
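
As a language-neutral illustration of the relational sample/case design the abstract describes, the sketch below uses Python's built-in sqlite3 module; the actual system runs MySQL behind a NodeJS server, and every table and column name here is hypothetical.

```python
import sqlite3

# Hypothetical schema illustrating the case / sample / genetic-profile
# relationships described in the abstract (the real system uses MySQL;
# the names and columns here are invented for illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE criminal_case (
    id          INTEGER PRIMARY KEY,
    case_number TEXT UNIQUE NOT NULL
);
CREATE TABLE sample (
    id           INTEGER PRIMARY KEY,
    case_id      INTEGER NOT NULL REFERENCES criminal_case(id),
    collected_on TEXT,
    status       TEXT DEFAULT 'received'  -- e.g. received / processed / profiled
);
CREATE TABLE genetic_profile (
    id          INTEGER PRIMARY KEY,
    sample_id   INTEGER NOT NULL REFERENCES sample(id),
    str_markers TEXT                      -- serialized STR loci/allele pairs
);
""")
conn.execute("INSERT INTO criminal_case (case_number) VALUES ('2018-0001')")
conn.execute("INSERT INTO sample (case_id, collected_on) VALUES (1, '2018-03-05')")

# Workflow query: samples of a case still awaiting a genetic profile.
pending = conn.execute("""
    SELECT s.id FROM sample s
    LEFT JOIN genetic_profile p ON p.sample_id = s.id
    WHERE s.case_id = 1 AND p.id IS NULL
""").fetchall()
print(pending)  # [(1,)]
```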

Keywords: Database, forensic genetics, genetic analysis, sample management, software solution.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1107
213 Removal of Rhodamine B from Aqueous Solution Using Natural Clay by Fixed Bed Column Method

Authors: A. Ghribi, M. Bagane

Abstract:

The discharge of dyes in industrial effluents is of great concern because their presence and accumulation have toxic or carcinogenic effects on living species, and the removal of such compounds at such low levels is a difficult problem. The adsorption process is an effective and attractive proposition for the treatment of dye-contaminated wastewater. Activated carbon adsorption in fixed beds is a very common technology in water treatment, especially in decolouration processes. However, it is expensive, and powdered carbon is difficult to separate from the aqueous system once it becomes exhausted or once the effluent reaches the maximum allowable discharge level. Regeneration of exhausted activated carbon by chemical and thermal procedures is also expensive and results in loss of the sorbent. The focus of this research was to evaluate the adsorption potential of raw clay for removing rhodamine B from aqueous solutions using a laboratory fixed-bed column. The sorption process was conducted continuously in order to simulate industrial conditions. The effect of process parameters, such as inlet flow rate, adsorbent bed height, and initial adsorbate concentration, on the shape of the breakthrough curves was investigated. A glass column with an internal diameter of 1.5 cm and a height of 30 cm was used as the fixed-bed column. The pH of the feed solution was set at 8.5. Experiments were carried out at different bed heights (5 - 20 cm), influent flow rates (1.6 - 8 mL/min), and influent rhodamine B concentrations (20 - 80 mg/L). The obtained results showed that the adsorption capacity increases with bed depth and initial concentration, and decreases at higher flow rates. Column regeneration was possible for four adsorption-desorption cycles. The column study demonstrates the excellent adsorption capacity of the clay for the removal of rhodamine B from aqueous solution; uptake through the fixed-bed column was dependent on bed depth, influent rhodamine B concentration, and flow rate.
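
A standard summary of breakthrough curves like those measured here is the bed's adsorption capacity, obtained by integrating the removed concentration over time: q = (Q/m) ∫ (C_in − C_out) dt. A minimal sketch with entirely hypothetical numbers (the paper reports its own measured curves):

```python
import numpy as np

def bed_capacity(time_min, c_out, c_in, flow_ml_min, mass_g):
    """Adsorption capacity q (mg/g) from a breakthrough curve:
    q = (Q / m) * integral of (C_in - C_out) dt, with Q in L/min,
    concentrations in mg/L, and time in minutes."""
    dc = c_in - np.asarray(c_out, dtype=float)   # mg/L removed at each time
    # Trapezoidal integration of dc over time -> mg*min/L.
    removed = float(np.sum((dc[:-1] + dc[1:]) / 2.0 * np.diff(time_min)))
    return (flow_ml_min / 1000.0) * removed / mass_g

# Hypothetical breakthrough data (not from the paper): outlet concentration
# rising toward the 40 mg/L inlet value as the clay bed saturates.
t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)  # min
c = np.array([0, 1, 4, 12, 24, 34, 39], dtype=float)       # mg/L at the outlet
print(f"q = {bed_capacity(t, c, c_in=40.0, flow_ml_min=4.0, mass_g=5.0):.2f} mg/g")
```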

Keywords: Adsorption, Breakthrough curve, Clay, Fixed bed column, Rhodamine B, Regeneration.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1630
212 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. Here, visual consistency refers to the amount of similarity between adjacent components of the townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them, and highlights their significance. It uses arithmetic methods from outside the domain of urban design to establish an objective approach to assessment which also accommodates subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance, and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape and focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space through assessing the location of their vertices. The last two measurements quantify the visual similarity in both vertices and colour across the townscape by calculating their variation, using methods including the standard deviation and the colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool, currently under development, which can alter the current understanding of architectural form and its application in urban space. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which supports creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
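
To make the last two measurements concrete, the sketch below computes the standard deviation of per-component vertex counts and a Euclidean colour distance. The colour space (plain RGB) and all sample values are assumptions made here for illustration; the authors' tool may define both differently.

```python
import math
from statistics import pstdev

def vertex_count_consistency(vertex_counts):
    """Standard deviation of per-building vertex counts: a lower value
    implies more geometric consistency among adjacent components."""
    return pstdev(vertex_counts)

def colour_distance(rgb_a, rgb_b):
    """Euclidean distance between two colours (RGB space assumed here;
    the paper does not commit to a specific colour space)."""
    return math.dist(rgb_a, rgb_b)

facades = [(201, 186, 160), (198, 180, 150), (90, 90, 200)]
print(vertex_count_consistency([24, 26, 25, 80]))  # one ornate outlier -> high deviation
print(colour_distance(facades[0], facades[1]))     # similar facades -> small distance
print(colour_distance(facades[0], facades[2]))     # odd one out -> large distance
```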

Keywords: Townscape, Urban Design, Visual Assessment, Visual Consistency.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1584
211 The Dialectic between Effectiveness and Humanity in the Era of Open Knowledge from the Perspective of Pedagogy

Authors: Sophia Ming Lee Wen, Chao-Ching Kuo, Yu-Line Hu, Yu-Lung Ho, Chih-Cheng Huang, Yi-Hwa Lee

Abstract:

Teaching and learning should engage with social issues, giving due consideration to both effectiveness and humanity as guidelines for sharing and co-creating knowledge. A qualitative method was used, after a pioneer study, to confirm pre-service teachers' awareness of open knowledge. Seventeen in-service teacher candidates were sampled from 181 schools in Taiwan. Two questions were to be resolved: a) how did teachers change their educational ideas, in particular their attitudes, to meet the needs of knowledge sharing and co-creativity; and b) how did they acknowledge the necessity of striking an appropriate balance between educational efficiency and the nature of education for high-performance management. The interviews investigated teachers' attitudes toward sharing and co-creating knowledge. The results show two facts in Taiwan: a) individuals who are able to express themselves are capable of taking part in an open learning environment; and b) teachers must lead the way in inspiring high performance and improving students' capacity via knowledge sharing and co-creation, in keeping with a student-centered philosophy. The data collected from the interviews showed that the teachers were well aware of the need to change their teaching methods and to make improvements that balance educational efficiency with the nature of education. Almost all teachers acknowledged that ICT is helpful in motivating enthusiasm for learning. Further, teaching integrated with ICT saves teachers' time and energy in lesson preparation and promotes effectiveness. Teachers are willing to co-create knowledge with students, though using information technology is not easy owing to a lack of skills in operating websites and ICT. Some teachers are against co-creating knowledge in an informational setting, since they hold that it is not feasible given the knowledge gap between teachers and students. Technology can easily mislead teachers and students toward the goal of instrumental rationality, which makes pedagogy dysfunctional and inhumane; high-quality teaching should instead strike a dialectical balance between effectiveness and humanity.

Keywords: Open knowledge, dialectic between effectiveness and humanity, pedagogy, critical thinking.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1344