Search results for: Porous structure particle; Carbon nanoparticles; Catalyst; Spray-drying method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11258

218 Urbanization and Income Inequality in Thailand

Authors: Acumsiri Tantiakrnpanit

Abstract:

This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using panel data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) urban areas defined by Thailand's National Statistical Office (LFS), and 2) urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellites. The second method includes two sub-categories: 2.1) delineating urban areas from nighttime light density corresponding to a population density of 300 people per square kilometer, and 2.2) delineating urban areas from nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis, based on Ordinary Least Squares (OLS), fixed effects, and random effects models, reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant negative impact on income inequality, while the square of urbanization shows a statistically significant positive impact. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
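
As a hedged illustration of the quadratic (U-shaped) specification with province fixed effects described above, the Python sketch below uses synthetic data and illustrative column names (not the authors' actual variables or estimates):

```python
# Minimal sketch of a fixed-effects panel regression with a quadratic urbanization term.
# Data, column names, and coefficients are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_prov, n_years = 76, 19
df = pd.DataFrame({
    "province": np.repeat(np.arange(n_prov), n_years),
    "year": np.tile(np.arange(2002, 2002 + n_years), n_prov),
})
df["urban"] = rng.uniform(0.1, 0.9, len(df))         # urbanization share (hypothetical)
df["log_income"] = rng.normal(10, 0.5, len(df))       # log mean income (hypothetical)
# Synthetic Gini series following the U-shape reported in the abstract
df["gini"] = (0.6 - 0.4 * df["urban"] + 0.3 * df["urban"] ** 2
              - 0.02 * df["log_income"] + rng.normal(0, 0.02, len(df)))

# Fixed-effects (within) estimator via province dummies
model = smf.ols("gini ~ urban + I(urban**2) + log_income + C(province)", data=df).fit()
print(model.params.filter(regex="urban|log_income"))
```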

Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.

217 A Closed-Loop Design Model for Sustainable Manufacturing by Integrating Forward Design and Reverse Design

Authors: Yuan-Jye Tseng, Yi-Shiuan Chen

Abstract:

In this paper, a new concept of closed-loop design for a product is presented. The closed-loop design model is developed by integrating forward design and reverse design. Based on this new concept, a closed-loop design model for sustainable manufacturing is developed through integrated evaluation of forward design, reverse design, and green manufacturing using a fuzzy analytic network process. In the design stage of a product, with a given product requirement and objective, there can be different ways to design the detailed components and specifications. Therefore, there can be different design cases that achieve the same product requirement and objective. Subsequently, in the design evaluation stage, the different design cases must be analyzed and evaluated. The purpose of this research is to develop a model for evaluating the design cases through integrated evaluation of the criteria in forward design, reverse design, and green manufacturing. A fuzzy analytic network process method is presented for integrated evaluation of the criteria in the three models. The comparison matrices for evaluating the criteria in the three groups are established. The total relational values among the three groups represent the total relational effects. In applications, a supermatrix model is created, and the total relational values can be used to evaluate the design cases and support decision-making to select the final design case. An example product is demonstrated, showing that the model is useful for integrated evaluation of forward design, reverse design, and green manufacturing to achieve the objective of closed-loop design for sustainable manufacturing.
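
As a hedged sketch of the supermatrix step assumed above, the snippet below raises a column-stochastic weighted supermatrix to successive powers until its columns converge, giving overall priorities; the 3x3 matrix (forward design, reverse design, green manufacturing) is purely illustrative, not the paper's data:

```python
# Minimal numpy sketch of an ANP limit-supermatrix computation (illustrative values only).
import numpy as np

W = np.array([[0.20, 0.50, 0.30],
              [0.40, 0.20, 0.50],
              [0.40, 0.30, 0.20]])   # columns sum to 1 (column-stochastic)

limit = np.linalg.matrix_power(W, 100)   # limit supermatrix
priorities = limit[:, 0]                 # any column after convergence
print(priorities)                        # overall relational weights of the three groups
```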

Keywords: Design evaluation, forward design, reverse design, closed-loop design, supply chain management, closed-loop supply chain, fuzzy analytic network process.

216 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
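
As a hedged sketch of the naive Bayes step described above, the snippet classifies risk-factor vectors into endangerment groups; the feature values and group labels are illustrative placeholders, not the survey's actual risk factors:

```python
# Minimal scikit-learn sketch of naive-Bayes classification of institutional risk profiles.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Rows: institutional risk profiles; columns: aggregated risk-factor values (hypothetical)
X_train = np.array([[0.9, 0.7, 0.8],
                    [0.2, 0.1, 0.3],
                    [0.8, 0.6, 0.9],
                    [0.3, 0.2, 0.1]])
y_train = ["high_endangerment", "low_endangerment",
           "high_endangerment", "low_endangerment"]

clf = GaussianNB().fit(X_train, y_train)
new_profile = np.array([[0.7, 0.8, 0.6]])        # multidimensional risk vector
print(clf.predict(new_profile))                  # recommended endangerment group
print(clf.predict_proba(new_profile))            # supporting probabilities
```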

Keywords: linked open data, information integration, digital libraries, data mining.

215 Oil-Water Two-Phase Flow Characteristics in Horizontal Pipeline – A Comprehensive CFD Study

Authors: Anand B. Desamala, Ashok Kumar Dasamahapatra, Tapas K. Mandal

Abstract:

In the present work, the flow characteristics of a pair of immiscible liquids through a horizontal pipeline are analyzed in detail by simulation using ANSYS FLUENT 6.2. Moderately viscous oil and water (viscosity ratio = 107, density ratio = 0.89 and interfacial tension = 0.024 N/m) have been taken as system fluids for the study. The Volume of Fluid (VOF) method has been employed by assuming unsteady flow, an immiscible liquid pair, constant liquid properties, and co-axial flow. Meshing has been done using GAMBIT. A quadrilateral mesh type has been chosen to account for the surface tension effect more accurately. From the grid independence study, 47,037 mesh elements were selected for the entire geometry. The simulation successfully predicts slug, stratified wavy, stratified mixed and annular flow, except dispersion of oil in water and dispersion of water in oil. Simulation results are validated against literature data for horizontal flow, and good conformity is observed. Subsequently, we have simulated the hydrodynamics (viz., velocity profile, area-averaged pressure across a cross section and volume fraction profile along the radius) of stratified wavy and annular flow at different phase velocities. The simulation results show that in annular flow, the total pressure of the mixture decreases with an increase in oil velocity because the pipe cross section is completely wetted with water. The simulated oil volume fraction shows a maximum at the centre in core annular flow, whereas in stratified flow the maximum appears at the upper side of the pipeline. These results are in accord with the actual flow configuration. Our findings could be useful in designing pipelines for transportation of crude oil.
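
For reference, a compact statement of the VOF formulation assumed here (standard textbook form, not quoted from the paper): the oil volume fraction is advected with the flow and mixture properties are volume-fraction weighted.

```latex
\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) = 0, \qquad
\rho_m = \alpha\,\rho_{\mathrm{oil}} + (1-\alpha)\,\rho_{\mathrm{water}}, \qquad
\mu_m = \alpha\,\mu_{\mathrm{oil}} + (1-\alpha)\,\mu_{\mathrm{water}}
```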

Keywords: CFD, Horizontal pipeline, Oil-water flow, VOF technique.

214 Applying Resilience Engineering to Improve Safety Management in a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new paradigm of safety management that proposes to change the way safety is managed, focusing on the things that go well instead of the things that go wrong. Many complex and high-risk sectors such as air traffic control, health care, nuclear power plants, railways or emergencies have applied this new vision of safety and have obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by the statistics of occupational injuries worldwide. Therefore, it is important to improve safety management in this sector. For this reason, it is proposed to apply Resilience Engineering to the construction sector. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which allows measuring the four essential abilities (respond, monitor, learn and anticipate) for resilient performance. The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire and (iv) application of the Delphi method. The questionnaire obtained could be used as a tool to help construction companies to evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance. The following steps in this research are intended to develop other questions that allow evaluating the rest of the abilities for resilient performance, namely responding, monitoring and anticipating.

Keywords: Resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan.

213 Comparative Efficacy of Pomegranate Juice, Peel and Seed Extract in the Stabilization of Corn Oil under Accelerated Conditions

Authors: Zoi Konsoula

Abstract:

Antioxidant-rich extracts were prepared from pomegranate peels, seeds and juice using methanol and ethanol, and their antioxidant activity was evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and Ferric Reducing Antioxidant Power (FRAP) methods. Both analytical methods indicated a higher antioxidant activity in extracts prepared from peels, which was comparable to that of butylated hydroxytoluene (BHT). Furthermore, the antioxidant activity was correlated to the phenolic and flavonoid content of the various extracts. The antioxidant effectiveness of the extracts was also assessed using corn oil as the oxidation substrate. More specifically, preheated corn oil samples stabilized with extracts at a concentration of 250 ppm, 500 ppm or 1,000 ppm were subjected to accelerated aging (100 °C, 10 days), and the extent of oxidative alteration was followed by measurement of the peroxide value, conjugated dienes and trienes, as well as the p-anisidine value. BHT at its legal limit (200 ppm) served as a standard besides the control sample. Results from the different parameters were in agreement with each other, suggesting that pomegranate extracts can stabilize corn oil effectively under accelerated conditions at all concentrations tested. However, the magnitude of oil stabilization depended strongly on the amount of extract added, and this was positively correlated with their phenolic content. Pomegranate peel extracts, which exhibited not only the highest phenolic and flavonoid content but also the highest antioxidant activity, were more potent in inhibiting oxidative deterioration. Both methanolic and ethanolic peel extracts at a concentration of 500 ppm exerted a stabilizing effect comparable to that of BHT, while at a concentration of 1,000 ppm they exhibited higher stabilization efficiency in comparison to BHT. Finally, heating oil samples resulted in a time-dependent decrease in their antioxidant capacity. Samples containing peel extracts appeared to retain their antioxidant capacity for a longer period, indicating that these extracts contained active compounds that offered superior antioxidant protection to corn oil.
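
For reference, DPPH radical-scavenging activity is conventionally reported as a percentage of the control absorbance; this is the standard formula, assumed rather than quoted from the paper:

```latex
\%\ \text{scavenging} \;=\; \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}} \times 100
```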

Keywords: Antioxidant activity, corn oil, oxidative deterioration, pomegranate.

212 Cluster Analysis of Retailers’ Benefits from Their Cooperation with Manufacturers: Business Models Perspective

Authors: M. K. Witek-Hajduk, T. M. Napiórkowski

Abstract:

A number of studies have discussed the benefits of retailer-manufacturer cooperation and coopetition. However, there are only a few publications focused on the benefits of cooperation and coopetition between retailers and their suppliers of durable consumer goods, especially in the context of the business models of the cooperating partners. This paper aims to provide a clustering approach to segment retailers selling consumer durables according to the benefits they obtain from their cooperation with key manufacturers and to differentiate these retailers in terms of the business models of the cooperating partners. For the purpose of the study, a survey (using the CATI method) collected data on 603 consumer durables retailers present on the Polish market. Retailers are clustered with both hierarchical and non-hierarchical methods. Five distinctive groups of consumer durables retailers are identified (based on the studied benefits) using the two-stage clustering approach. The clusters are then characterized with a set of exogenous variables, key of which are the business models employed by the retailer and its partnering key manufacturer. The paper finds that a combination of a medium-sized retailer classified as an Integrator with chiefly domestic capital and a manufacturer categorized as a Market Player yields the highest benefits. On the other side of the spectrum is a medium-sized Distributor retailer with solely domestic capital; in this case, the business model of the cooperating manufacturer appears to be irrelevant. This paper is one of the first empirical studies using cluster analysis on primary data to define the types of cooperation between consumer durables retailers and manufacturers, their key suppliers. The analysis integrates the perspectives of both retailers' and manufacturers' business models and matches them with individual and joint benefits.
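
As a hedged sketch of a two-stage clustering of retailers on benefit scores, the snippet below runs hierarchical (Ward) clustering followed by k-means; the sample size of 603 and the choice of five clusters mirror the abstract, while the benefit variables themselves are placeholders:

```python
# Illustrative two-stage clustering sketch (synthetic benefit scores, not the survey data).
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(1)
benefits = rng.uniform(1, 7, size=(603, 8))   # 603 retailers x 8 benefit items (hypothetical)

# Stage 1: hierarchical clustering (exploratory, to judge the cluster structure)
hier_labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(benefits)

# Stage 2: non-hierarchical k-means with the chosen number of clusters
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(benefits)

print("hierarchical cluster sizes:", np.bincount(hier_labels))
print("k-means cluster sizes:     ", np.bincount(kmeans.labels_))
```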

Keywords: Business model, cooperation, cluster analysis, retailer-manufacturer relationships.

211 A Retrospective Drug Utilization Study of Antiplatelet Drugs in Patients with Ischemic Heart Disease

Authors: K. Jyothi, T. S. Mohamed Saleem, L. Vineela, C. Gopinath, K. B. Yadavender Reddy

Abstract:

Objective: Acute coronary syndrome is a clinical condition encompassing ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction and unstable angina, and is characterized by ruptured coronary plaque, stress and myocardial injury. Angina pectoris is a pressure-like pain in the chest that is induced by exertion or stress and relieved within minutes after cessation of effort or use of sublingual nitroglycerin. The present research was undertaken to study the drug utilization pattern of antiplatelet drugs for ischemic heart disease in a tertiary care hospital. Method: The present study is a retrospective drug utilization study with a study period of six months. The data were collected from discharge case sheets of the general medicine department of Rajiv Gandhi Institute of Medical Sciences, Kadapa. The tentative sample size fixed was 250 patients; out of the 250 cases, 19 cases were excluded because of unrelated data. Results: A total of 250 prescriptions were collected for the study; according to the inclusion criteria, 233 prescriptions were diagnosed with ischemic heart disease, and 17 prescriptions were excluded due to unrelated information. Out of 233 patients, 128 (54.9%) were male and 105 (45.1%) were female. According to the gender distribution, the prevalence of ischemic heart disease was 90 (70.3%) in males and 39 (37.1%) in females. In the same way, the prevalence of ischemic heart disease along with cerebrovascular disease was 39 (29.6%) in males and 66 (62.6%) in females. Conclusion: We found that 94.8% drug utilization of antiplatelet drugs was achieved at Rajiv Gandhi Institute of Medical Sciences, Kadapa, from 2011-2012.

Keywords: Angina pectoris, aspirin, clopidogrel, myocardial infarction.

210 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus the liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and the visualization of the inferior myocardium was improved. In past reports, myocardial uptake overlapped by high liver accumulation was not diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
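
As a simplified illustration of the time-subtraction idea described above, the numpy sketch below forms an approximate liver-only image from the difference of early and late dynamic frames and removes it from the late frame; frame timings follow the abstract, while the arrays are synthetic placeholders rather than reconstructed SPECT data:

```python
# Toy numpy sketch of dynamic-frame time subtraction (synthetic images only).
import numpy as np

early = np.random.rand(64, 64)     # image reconstructed from 0.5-2.5 minute data
late = np.random.rand(64, 64)      # image reconstructed from 5-10 minute data

liver_only = np.clip(early - late, 0, None)      # liver-dominated component
corrected = np.clip(late - liver_only, 0, None)  # myocardial image with liver suppressed
```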

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector.

209 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

The Intelligent transportation system is essential to build smarter cities. Machine learning based transportation prediction could be highly promising approach by delivering invisible aspect visible. In this context, this research aims to make a prototype model that predicts transportation network by using big data and machine learning technology. In detail, among urban transportation systems this research chooses bus system.  The research problem that existing headway model cannot response dynamic transportation conditions. Thus, bus delay problem is often occurred. To overcome this problem, a prediction model is presented to fine patterns of bus delay by using a machine learning implementing the following data sets; traffics, weathers, and bus statues. This research presents a flexible headway model to predict bus delay and analyze the result. The prototyping model is composed by real-time data of buses. The data are gathered through public data portals and real time Application Program Interface (API) by the government. These data are fundamental resources to organize interval pattern models of bus operations as traffic environment factors (road speeds, station conditions, weathers, and bus information of operating in real-time). The prototyping model is designed by the machine learning tool (RapidMiner Studio) and conducted tests for bus delays prediction. This research presents experiments to increase prediction accuracy for bus headway by analyzing the urban big data. The big data analysis is important to predict the future and to find correlations by processing huge amount of data. Therefore, based on the analysis method, this research represents an effective use of the machine learning and urban big data to understand urban dynamics.
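
The paper builds its model in RapidMiner Studio; the scikit-learn sketch below only illustrates the same idea of learning bus delay from traffic, weather and bus-status features, with all columns and the synthetic delay relationship being hypothetical:

```python
# Hedged sketch of supervised bus-delay prediction on synthetic features.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
data = pd.DataFrame({
    "road_speed": rng.uniform(5, 60, n),        # km/h (hypothetical feature)
    "rain_mm": rng.exponential(1.0, n),         # weather feature
    "passengers_waiting": rng.integers(0, 50, n),
    "scheduled_headway": rng.uniform(5, 15, n), # minutes
})
delay = (0.5 * data["rain_mm"] + 0.05 * data["passengers_waiting"]
         - 0.03 * data["road_speed"] + rng.normal(0, 0.5, n))   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(data, delay, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out trips:", model.score(X_test, y_test))
```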

Keywords: Big data, bus headway prediction, machine learning, public transportation.

208 A Damage Level Assessment Model for Extra High Voltage Transmission Towers

Authors: Huan-Chieh Chiu, Hung-Shuo Wu, Chien-Hao Wang, Yu-Cheng Yang, Ching-Ya Tseng, Joe-Air Jiang

Abstract:

Power failure resulting from tower collapse due to violent seismic events might bring enormous and inestimable losses. The Chi-Chi earthquake, for example, strongly struck Taiwan and caused huge damage to the power system on September 21, 1999. Nearly 10% of extra high voltage (EHV) transmission towers were damaged in the earthquake. Therefore, the seismic hazards of EHV transmission towers should be monitored and evaluated. The ultimate goal of this study is to establish a damage level assessment model for EHV transmission towers. The earthquake data provided by the Taiwan Central Weather Bureau serve as a reference and lay the foundation for the subsequent earthquake simulations and analyses. Some parameters related to the damage level of each point of an EHV tower are simulated and analyzed using the data from monitoring stations once an earthquake occurs. Through the Fourier transform, the seismic wave is analyzed and decomposed into its different frequencies, and the data are shown through a response spectrum. With this method, the seismic frequency which damages EHV towers the most is clearly identified. An estimation model is built to determine the damage level caused by a future seismic event. Finally, instead of relying on visual observation done by inspectors, the proposed model can provide a power company with the damage information of a transmission tower. Using the model, the manpower required for visual observation can be reduced, and the accuracy of the damage level estimation can be substantially improved. Such a model is greatly useful for structural health and construction monitoring because of the advantages of long-term evaluation of structural characteristics and long-term damage detection.
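
As a minimal sketch of the frequency-analysis step described above, the snippet transforms a recorded ground acceleration with the FFT so the dominant frequency band can be identified; note this gives the Fourier amplitude spectrum, whereas the full damped response spectrum would additionally require solving a single-degree-of-freedom oscillator over a range of periods. The record is synthetic:

```python
# Fourier amplitude spectrum of a synthetic ground-acceleration record.
import numpy as np

dt = 0.01                                   # sampling interval of the record, s
t = np.arange(0, 60, dt)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)  # synthetic record

amp = np.abs(np.fft.rfft(accel)) * dt       # Fourier amplitude spectrum
freq = np.fft.rfftfreq(t.size, d=dt)        # frequencies in Hz
dominant = freq[np.argmax(amp[1:]) + 1]     # skip the DC component
print(f"Dominant frequency: {dominant:.2f} Hz")
```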

Keywords: Smart grid, EHV transmission tower, response spectrum, damage level monitoring.

207 Microalbuminuria in Essential Hypertension

Authors: Sharan Badiger, Prema T. Akkasaligar, Sandeep HM, Biradar MS

Abstract:

Essential hypertension (HTN) usually clusters with other cardiovascular risk factors such as age, overweight, diabetes, insulin resistance and dyslipidemia. Target organ damage (TOD) such as left ventricular hypertrophy, microalbuminuria (MA), acute coronary syndrome (ACS), stroke and cognitive dysfunction takes place early in the course of hypertension. Though the prevalence of hypertension is high in India, the relationship between microalbuminuria and target organ damage in hypertension is not well studied. This study aims at detecting MA in essential hypertension and its relation to the severity of HTN, duration of HTN, body mass index (BMI), age and TOD such as hypertensive retinopathy and acute coronary syndrome. The present study was done in 100 non-diabetic patients with essential hypertension admitted to B.L.D.E. University's Sri B.M. Patil Medical College, Bijapur, from October 2008 to April 2011. The patients underwent detailed history taking and clinical examination. An early morning 5 ml urine sample was collected, and MA was estimated by the immunoturbidimetry method. The relationship of MA with the duration and severity of HTN, BMI, age, sex and TODs such as hypertensive retinopathy and ACS was assessed by univariate analysis. The prevalence of MA in this study was found to be 63%; of these, 42% were male and 21% were female. A significant association was found between MA and the duration of hypertension (p = 0.036, OR = 0.438): the longer the duration of hypertension, the greater the possibility of microalbumin in urine. There was also a significant association between the severity of hypertension and MA (p = 0.045, OR = 0.093); MA was positive in 50 (79.4%) of the 63 patients whose blood pressure was >160/100 mm Hg. A significant association was also found between MA and the grades of hypertensive retinopathy (p = 0.011) and acute coronary syndrome (p = 0.041, OR = 2.805). Gender and BMI did not pose a high risk for MA in this study. The prevalence of MA in essential hypertension is high in this part of the community, and MA increases the risk of developing target organ damage. Early screening of patients with essential hypertension for MA and aggressive management of positive cases might reduce the burden of chronic kidney disease and cardiovascular disease in the community.

Keywords: Acute coronary syndrome, essential hypertension, microalbuminuria, target organ damage.

206 Precipitation Intensity-Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley

Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara

Abstract:

The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations derived from daily rainfall data available from the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and some ancillary landslide inventory data for 2013 and 2014 have been used to determine the Intensity-Duration (ID) rainfall threshold. The derived governing threshold equation, I = 4.738D^(-0.025), has been considered for prediction of landslides in the study region. This equation was validated with an accuracy of 70% against landslides during August and September 2014. From the obtained results and validation, it can be inferred that this equation can be used for the initiation of landslides in the study area as part of an early warning system. The results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low cost method to obtain first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
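
The reported threshold can be evaluated directly; the short sketch below implements I = 4.738 D^(-0.025) and flags events whose mean intensity exceeds the curve. Units (mm/h for intensity, hours for duration) are assumed, since the abstract does not state them explicitly:

```python
# Direct use of the reported intensity-duration threshold (units assumed).
def threshold_intensity(duration_h: float) -> float:
    """Critical mean rainfall intensity for a given event duration."""
    return 4.738 * duration_h ** (-0.025)

def may_trigger_landslide(intensity_mm_h: float, duration_h: float) -> bool:
    """True when the observed mean intensity lies on or above the threshold curve."""
    return intensity_mm_h >= threshold_intensity(duration_h)

print(threshold_intensity(24))               # threshold for a 24-hour event
print(may_trigger_landslide(6.0, 24))        # example alert decision
```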

Keywords: Landslide, intensity-duration, rainfall threshold, Tropical Rainfall Measuring Mission, slope, inventory, early warning system.

205 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and the housing subsector are fraught with risks that have the potential to negatively impact the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The recent paradigm shift by the subsector to the use of a formal risk management approach, in contrast to hitherto developed rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: review of literature for similar projects, expert consultation and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritization assessment of the risk factors using computer-based Analytic Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to and impacting housing construction projects, the economic risk group and, in particular, ‘changes in demand for houses’ is prioritized by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of a multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
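
As an illustrative sketch of the AHP prioritization step assumed above, the snippet derives priority weights as the principal eigenvector of a pairwise comparison matrix and checks the consistency ratio; the 3x3 matrix of risk groups is a made-up example, not the study's judgments:

```python
# Minimal numpy sketch of AHP priority weights and consistency check (illustrative matrix).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # pairwise comparisons on the Saaty scale

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # priority weights of the risk groups

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # random index RI = 0.58 for n = 3
print(w, cr)                                # judgments acceptable if CR < 0.10
```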

Keywords: Risk identification, risk assessment, analytic hierarchy process, multi-criteria decision.

204 Validation of 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion

Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein

Abstract:

Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severity the lesion is usually covered with rough scales. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since the scale present on the lesion surface makes the lesion rougher. The dermatologist usually assesses the severity through the tactile sense; therefore, direct contact between doctor and patient is required. The problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of a psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled by a rough surface. The rough surface is created by superimposing a smooth average (curved) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models cannot be accepted (percentage error greater than 10%). These errors occur due to the scanned image quality. The roughness algorithm is validated for roughness measurement on abrasive papers with flat surfaces. The Pearson's correlation coefficient between the grade value (G) of abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome a problem with noisy data.
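
As a simplified sketch of the roughness computation described above, the snippet fits a low-order polynomial surface to a height map by least squares, subtracts it to obtain the elevation surface, and computes the average roughness Ra as the mean absolute deviation; the height map is synthetic, not a scanned lesion:

```python
# Polynomial surface fitting and average roughness on a synthetic height map.
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
height = 0.01 * x + 0.02 * y + 0.5 * np.abs(np.sin(0.8 * x))   # wavy "lesion" surface

# Least-squares fit of a 2nd-order polynomial surface z = f(x, y)
terms = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                         (x * y).ravel(), (x ** 2).ravel(), (y ** 2).ravel()])
coeffs, *_ = np.linalg.lstsq(terms, height.ravel(), rcond=None)
average_surface = (terms @ coeffs).reshape(ny, nx)

deviation = height - average_surface        # elevation surface (surface deviations)
Ra = np.mean(np.abs(deviation))             # average roughness index
print(f"Ra = {Ra:.4f}")
```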

Keywords: Psoriasis, roughness algorithm, polynomial surface fitting.

203 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can be easily attacked by various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, for unseen data across all different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with their parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it is proven to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
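
As a hedged sketch of the loss/optimizer grid described above, the PyTorch snippet trains the same small CNN with different loss functions and optimizers so their effect can be compared; the toy network and random data stand in for the AlexNet/VGG/ResNet models and the LivDet 2017 subset used in the paper:

```python
# Toy PyTorch sketch comparing loss functions and optimizers on a small CNN.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.MaxPool2d(2))
        self.classifier = nn.Linear(8 * 16 * 16, 2)   # live vs. spoof
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

x = torch.randn(32, 1, 32, 32)                 # fake fingerprint patches
y = torch.randint(0, 2, (32,))

losses = {"cross_entropy": nn.CrossEntropyLoss(), "hinge": nn.MultiMarginLoss()}
optimizers = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
              "rmsprop": torch.optim.RMSprop, "adagrad": torch.optim.Adagrad}

for loss_name, criterion in losses.items():
    for opt_name, opt_cls in optimizers.items():
        model = TinyCNN()
        optimizer = opt_cls(model.parameters(), lr=1e-3)
        for _ in range(5):                     # a few toy training steps
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        print(loss_name, opt_name, float(loss))
```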

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

202 Modeling a Multinomial Logit Model of Intercity Travel Mode Choice Behavior for All Trips in Libya

Authors: Manssour A. Abdulsalam Bin Miskeen, Ahmed Mohamed Alhodairi, Riza Atiq Abdullah Bin O. K. Rahmat

Abstract:

From a planning point of view, mode choice modelling is essential because of the massive costs incurred in transportation systems. Intercity travellers in Libya have distinct features compared with travellers from other countries, including cultural and socioeconomic factors. Consequently, the goal of this study is to recognize the behavior of intercity travel using disaggregate models, for projecting the demand for nation-level intercity travel in Libya. A Multinomial Logit Model for all intercity trips has been formulated to examine national-level intercity transportation in Libya. The multinomial logit model was calibrated using a nationwide revealed preference (RP) and stated preference (SP) survey. The model was developed for different purposes of intercity trips (work, social and recreational). The parameters of the model have been estimated based on the maximum likelihood method. The data needed for model development were obtained from all major intercity corridors in Libya. The final sample size consisted of 1,300 interviews. About two-thirds of these data were used for model calibration, and the remaining part was used for model validation. This study, which is the first of its kind in Libya, investigates intercity travellers' mode-choice behavior. The intercity travel mode-choice model was successfully calibrated and validated. The outcomes indicate that the overall model is effective and yields high estimation precision. The proposed model is beneficial because it is responsive to many variables and can be employed to determine the impact of modifications in numerous characteristics on the demand for various travel modes. Estimates from the model might also be of value to planners, who can estimate probabilities for various modes and determine the impact of specific policy modifications on the demand for intercity travel.
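
As a hedged sketch of a multinomial logit mode-choice estimation by maximum likelihood, the statsmodels snippet below regresses the chosen mode on traveller and trip attributes; the variables and mode labels are illustrative, not the survey's actual RP/SP variables:

```python
# Minimal statsmodels sketch of a multinomial logit mode-choice model (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1300                                    # sample size reported in the abstract
X = pd.DataFrame({
    "travel_cost": rng.uniform(10, 200, n),
    "travel_time": rng.uniform(1, 12, n),   # hours
    "income": rng.uniform(300, 3000, n),
})
mode = rng.integers(0, 3, n)                # 0 = car, 1 = bus, 2 = air (hypothetical coding)

model = sm.MNLogit(mode, sm.add_constant(X)).fit(disp=False)   # maximum likelihood estimation
print(model.summary())
```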

Keywords: Multinomial logit model, improved intercity transport, intercity mode-choice behavior, disaggregate analysis.

201 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information of Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired Level of Development (LOD), Level of Information (LOI), Grade of Generation (GOG) as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings) and architectural (e.g., cornices, moldings and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added within the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method, as well as the fact of using only one BIM software package with its respective plugin for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.

Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.

200 Stress Analysis of Hexagonal Element for Precast Concrete Pavements

Authors: J. Novak, A. Kohoutkova, V. Kristek, J. Vodicka, M. Sramek

Abstract:

While the use of cast-in-place concrete for airfield and highway pavement overlays is very common, the application of precast concrete elements is very limited today. The main reasons are high production costs and complex structural behavior. Despite that, several precast concrete systems have been developed and tested with the aim of providing a system allowing rapid construction. This contribution deals with the reinforcement design of a hexagonal element developed for a proposed airfield pavement system. The sub-base course of the system is composed of compacted recycled concrete aggregates, with fiber reinforced concrete with recycled aggregates placed on top of it. The selected element belongs to a group of precast concrete elements which are being considered for the construction of a surface course. Both the high cost of full-scale experiments and the need to investigate various elements make it necessary to simulate their behavior in numerical analysis software using the finite element method instead of performing expensive experiments. The simulation of the selected element was conducted on a nonlinear model in order to obtain results which could fully substitute for results from experiments. The main objective was to design the reinforcement of the precast concrete element subjected to quasi-static loading from airplanes with respect to geometrical imperfections, manufacturing imperfections, tensile stress in reinforcement, compressive stress in concrete and crack width. The obtained findings demonstrate that the position and the presence of imperfections in a pavement highly affect the stress distribution in the precast concrete element. The precast concrete element should be heavily reinforced to fulfill all the demands. Using under-reinforced concrete elements would lead to the formation of wide, permanently open cracks.

Keywords: Imperfection, numerical simulation, pavement, precast concrete element, reinforcement design, stress analysis.

199 21st Century Biotechnological Research and Development Advancements for Industrial Development in India

Authors: Monisha Isaac

Abstract:

Biotechnology is a discipline concerned with the use of living organisms and systems to construct a product; it can also be defined as an application or technology developed to use biological systems and organism processes for a specific purpose. In particular, it includes the use of cells and their components for new technologies and inventions. The tools developed can be further used in diverse fields such as agriculture, industry, research and hospitals. The 21st century has seen a drastic development and advancement of biotechnology in India. A significant increase in the Government of India's outlays for biotechnology over the past decade has been observed. A sectoral break-up of biotechnology-based companies in India shows that most of the companies are agriculture-based companies having interests ranging from tissue culture to biopesticides. Major attention has been given by the companies to health-related activities and to environmental biotechnology. The biopharmaceutical segment, which comprises vaccines, diagnostics and recombinant products, is the most reliable and largest segment of the Indian biotech industry. India has developed its vaccine markets and supplies vaccines to various countries. Then there are the bio-services, which mainly comprise contract research and manufacturing services. India has made noticeable developments in the field of bio-industries, including the manufacturing of enzymes, biofuels and biopolymers. Biotechnology is also playing a crucial and significant role in the field of agriculture. Traditional methods have been replaced by new technologies that mainly focus on GM crops, marker-assisted technologies and the use of biotechnological tools to improve the quality of fertilizers and soil. Agriculture may only be a small contributor, but it has been shown to have huge potential for growth. Bioinformatics is a computational discipline that helps to store, manage and arrange data, and to design tools to interpret the extensive data gathered through experimental trials, making it important in the design of drugs.

Keywords: Biotechnology, advancement, agriculture, bio-services, bio-industries, bio-pharmaceuticals.

198 Bronchospasm Analysis Following the Implementation of a Program of Maximum Aerobic Exercise in Active Men

Authors: Sajjad Shojaeidoust, Mohsen Ghanbarzadeh, Abdolhamid Habibi

Abstract:

Exercise-induced bronchospasm (EIB) is a transitory condition of airflow obstruction that is associated with physical activities. It is noted that high ventilation can lead to increased heat and reduced moisture in the airways and to increased resistance of the trachea, which are causes of the pathophysiological mechanism of EIB. Accordingly, studying parameters of pulmonary function (FVC, FEV1) among active people seems essential. The aim of this study was to analyze bronchospasm following the implementation of a program of maximum aerobic exercise in active men at Chamran University of Ahwaz. Method: In this quasi-experimental study, the population consisted of all students at Chamran University. From 55 participants, 15 were randomly selected as the experimental group. In this study, maximum oxygen consumption was initially measured, and then, based on the maximum oxygen consumed, the active individuals were identified. After a five-minute warm-up, the Strand treadmill exercise test was taken (one session), and pulmonary parameters were measured at both pre- and post-test (spirometry). After testing data normality with the Kolmogorov-Smirnov (KS) test and finding the data non-normal, the Wilcoxon test was used to analyze the data. The significance level for all statistical tests was considered p ≤ 0.05. Results: The results showed that the ventilation and bronchospasm factors (FVC, FEV1) in the pre-test and post-test showed no significant difference among the active people (p ≥ 0.05). Discussion and conclusion: Based on the results observed in this study, it appears that pulmonary indices in active individuals increased after the aerobic test. The increase in these indices in active people is due to increased volume and elasticity of the lungs as well. In other words, the pulmonary index is affected by the rib muscles. It is considered that improvements in respiratory muscle strength and endurance have raised FEV1 in the active cases.

Keywords: Bronchospasm, maximum aerobic exercise, pulmonary function, spirometer.

197 Multi-Objective Optimization of Gas Turbine Power Cycle

Authors: Mohsen Nikaein

Abstract:

Because of the importance of energy, optimization of power generation systems is necessary. Gas turbine cycles are a suitable means of fast power generation, but their efficiency is relatively low. In order to achieve higher efficiencies, measures such as recovery of heat from exhaust gases in a regenerator, use of an intercooler in a multistage compressor, and steam injection into the combustion chamber are preferred. However, thermodynamic optimization of the gas turbine cycle, even with the above components, is still necessary. In this article, multi-objective genetic algorithms are employed for Pareto-approach optimization of the Regenerative-Intercooling-Gas Turbine (RIGT) cycle. In multi-objective optimization, a number of conflicting objective functions are to be optimized simultaneously. The important objective functions that have been considered for optimization are the entropy generation of the RIGT cycle (Ns), derived using exergy analysis and the Gouy-Stodola theorem, the thermal efficiency and the net output power of the RIGT cycle. These objectives usually conflict with each other. The design variables consist of thermodynamic parameters such as compressor pressure ratio (Rp), excess air in combustion (EA), turbine inlet temperature (TIT) and inlet air temperature (T0). At the first stage, single-objective optimization has been investigated, and the method of the Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used for multi-objective optimization. Optimization procedures are performed for two and three objective functions, and the results are compared for the RIGT cycle. In order to investigate the optimal thermodynamic behavior of two objectives, different sets, each including two of the output objectives, are considered individually. For each set, the Pareto front is depicted. The sets of selected decision variables based on this Pareto front will cause the best possible combination of corresponding objective functions. No point on the Pareto front is superior to another, but every point on the front is superior to any other point. In the case of three-objective optimization, the results are given in tables.
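
As a minimal sketch of the Pareto-dominance idea underlying the NSGA-II step, the snippet filters candidate cycle designs so that only non-dominated points remain (minimize Ns, maximize efficiency, maximize net power); the objective values are synthetic placeholders, not results from the paper:

```python
# Non-dominated (Pareto) filtering of candidate designs on synthetic objective values.
import numpy as np

rng = np.random.default_rng(4)
# Columns: entropy generation Ns (min), thermal efficiency (max), net power (max)
designs = rng.uniform([0.1, 0.25, 50], [1.0, 0.45, 150], size=(200, 3))

# Convert to a pure minimization problem by negating the maximized objectives
F = designs * np.array([1.0, -1.0, -1.0])

def is_dominated(i, F):
    others = np.delete(F, i, axis=0)
    # dominated if some other design is at least as good everywhere and strictly better somewhere
    return np.any(np.all(others <= F[i], axis=1) & np.any(others < F[i], axis=1))

pareto_mask = np.array([not is_dominated(i, F) for i in range(len(F))])
print(f"{pareto_mask.sum()} non-dominated designs on the Pareto front")
```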

Keywords: Exergy, entropy generation, Brayton cycle, design parameters, optimization, genetic algorithm, multi-objective.

196 The Necessity of Optimized Management on Surface Water Sources of Zayanderood Basin

Authors: A. Gandomkar, K. Fouladi

Abstract:

One of the important factors in the comprehensive development of an area is the provision of water sources and, on the other hand, their appropriate management. Population growth and food security for such a population necessitate sustainable development, alongside reform of traditional management, in order to increase the benefit obtained from the sources; in this way, sustainable exploitation of the sources for the next generations is also considered in the program. Achieving this development without consideration of the possibilities for water development will be very difficult. The Zayanderood basin, with an area of 41,500 square kilometers, contains 7 sub-basins and 20 hydrologic units. In this basin, only a small part of the total precipitation enters the river currents, and the rest is lost to efficient use in various ways. The most important surface stream of this basin is the Zayanderood River, with a length of 403 kilometers, which originates from the eastern slopes of the Zagros mountains and, after draining the basin, enters the Gaavkhooni wetland. The existence of various water sources and consumptions in the Zayanderood basin, water transfer from other basins into this basin, the conflict between upstream and downstream beneficiaries, the existence of valuable natural ecosystems such as the Gaavkhooni swamp in this basin, and finally the drought conditions and lack of water in this area all necessitate comprehensive management of water sources in this central basin of Iran; such management considers the development and management of water sources in a balanced way to increase the economic and social benefits. In this study, the network of surface water sources of the basin is surveyed in the upper and lower sections; then, according to the difficulties and deficiencies of efficient management of water sources in this basin, as well as the difficulties of water drainage and the destructive phenomenon of flood water, appropriate guidelines adapted to the regional conditions are presented in order to prevent the diversion of water in the upper sections and to support the development of regions in the lower sections of the Zayanderood dam.

Keywords: Zayanderood basin, efficient management, hydrology, climate.

195 Adverse Curing Conditions and Performance of Concrete: Bangladesh Perspective

Authors: T. Manzur

Abstract:

Concrete is the predominant construction material in Bangladesh. In large projects, stringent quality control procedures are usually followed under the supervision of experienced engineers and skilled labors. However, in the case of small projects, and particularly at locations distant from major cities, proper quality control is often an issue. It has been found from experience that such quality-related issues mainly arise from inappropriate proportioning of concrete mixes and improper curing conditions. In most cases, the external curing method is followed, which requires a supply of an adequate quantity of water along with proper protection against evaporation. Often these conditions are found missing at typical construction sites and eventually lead to the production of weaker concrete, both in terms of strength and durability. In this study, an attempt has been made to investigate the performance of general concreting works of the country when subjected to several adverse curing conditions that are quite common in various small to medium construction sites. A total of six different types of adverse curing conditions were simulated in the laboratory, and samples were kept under those conditions for several days. A set of samples was also submerged under the normal curing condition with a proper supply of curing water. The performance of the concrete was evaluated in terms of compressive strength, tensile strength, chloride permeability and drying shrinkage. About 37% and 25% reductions in 28-day compressive and tensile strength, respectively, were observed for samples subjected to the most adverse curing condition as compared to the samples under normal curing conditions. Normal curing concrete exhibited moderate permeability (close to low permeability), whereas concrete under adverse curing conditions showed very high permeability values. Similar results were also obtained for the shrinkage tests. This study, thus, will assist concerned engineers and supervisors to understand the importance of quality assurance during the curing period of concrete.

Keywords: Adverse, concrete, curing, compressive strength, drying shrinkage, permeability, tensile strength.

194 Identification of Flexographic-printed Newspapers with NIR Spectral Imaging

Authors: Raimund Leitner, Susanne Rosskopf

Abstract:

Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR Spectral Imaging techniques can measure, in real-time, both the size and shape of an object as well as identify the material the object is made of. The online classification and sorting of recovered paper with NIR Spectral Imaging (SI) is used with success in the paper recycling industry throughout Europe. Recently, the globalisation of the recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear also in central Europe. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. Thus, the flexo-printed newspapers are a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project for the development of an automated entry inspection system for recovered paper that was jointly conducted by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper has been developed. The prototype can identify and sort out flexo-printed newspapers in real-time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, allows the development of inspection systems for incoming goods in a paper production facility as well as industrial sorting systems for recovered paper in the recycling industry in the near future.

Keywords: Spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.

193 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Progress in pavement design has led to the development of a design method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). Saudi Arabia's road and highway network is currently expanding as a result of increasing traffic volume, and the MEPDG is therefore being implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires calibration of the distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for the calibration of the MEPDG in Central Saudi Arabia. The first goal is therefore the collection of data for flexible pavement design under the local conditions of the Riyadh region. Since the collected data must be converted into MEPDG input data, the main goal of this paper is the analysis of the collected data. The analysis covers truck classification, the traffic growth factor, annual average daily truck traffic (AADTT), monthly adjustment factors (MAFi), vehicle class distribution (VCD), truck hourly distribution factors, axle load distribution factors (ALDF), the number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, providing an approach for the successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
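
To make three of the traffic inputs named above concrete, the following is a minimal Python sketch that derives AADTT, the vehicle class distribution (VCD), and monthly adjustment factors (MAF) from raw monthly truck counts by class. The count matrix is a hypothetical placeholder, not Riyadh data, and only illustrates the standard definitions of these quantities.

# Minimal sketch: AADTT, VCD and MAF from monthly truck counts by class.
import numpy as np

# counts[m, c] = trucks of class c counted in month m (12 months x 4 classes here).
counts = np.random.default_rng(1).integers(8000, 20000, size=(12, 4)).astype(float)
days_per_month = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31], dtype=float)

daily_by_month = counts.sum(axis=1) / days_per_month   # average daily trucks in each month
aadtt = daily_by_month.mean()                          # annual average daily truck traffic

vcd = counts.sum(axis=0) / counts.sum() * 100.0        # vehicle class distribution (%)
maf = 12.0 * daily_by_month / daily_by_month.sum()     # monthly adjustment factors (mean = 1)

print(f"AADTT ~ {aadtt:.0f} trucks/day")
print("VCD (%):", np.round(vcd, 1))
print("MAF:", np.round(maf, 2))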

Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1219
192 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin

Abstract:

Higher ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make a prediction or estimation. Analysis focusing on the higher or extreme values of GLO concentration, however, is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding model (Gumbel, Weibull, or Frechet) within the GEV family. The results show that the Weibull distribution, a short-tailed distribution considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the fitted GEV distribution. The parameters are estimated using the maximum likelihood estimation (MLE) method at the selected stations in Peninsular Malaysia. The fit of the block maxima series to the GEV distribution is then validated using probability plots, quantile plots, and the likelihood ratio test. Profile likelihood confidence intervals are used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
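
As a worked illustration of the block maxima approach described above, the following Python sketch fits a GEV distribution by maximum likelihood and computes return levels for several return periods. The block maxima series and return periods are hypothetical placeholders, not the study's station data.

# Minimal sketch: MLE fit of a GEV distribution and return-level computation.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
maxima = rng.gumbel(loc=0.08, scale=0.02, size=19)   # placeholder block maxima (ppm)

# SciPy's shape parameter c equals -xi in the usual GEV notation, so
# c > 0 ~ Weibull (short tail), c = 0 ~ Gumbel, c < 0 ~ Frechet (heavy tail).
c, loc, scale = genextreme.fit(maxima)

for T in (5, 10, 50, 100):                           # return periods (years)
    z_T = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {z_T:.4f} ppm")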

Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 516
191 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. The work includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the validity of the proposed method. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges common to popular image fusion methods: although their high computational cost and complex processing steps yield accurate fused results, they are difficult to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity.
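
To show the general shape of MST-based fusion, the following is a minimal Python sketch (not the paper's MATLAB implementation): both source images are decomposed with a wavelet transform as one common MST, the approximation bands are averaged, and detail coefficients are merged with a simple max-absolute rule standing in for the paper's region-based rule with consistency verification. It assumes grayscale, co-registered, equal-size inputs; the data are synthetic placeholders.

# Minimal sketch of wavelet-based multi-scale image fusion with a max-|coefficient| rule.
import numpy as np
import pywt

def fuse_mst(visible: np.ndarray, thermal: np.ndarray, wavelet="db2", levels=3):
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    ct = pywt.wavedec2(thermal.astype(float), wavelet, level=levels)

    fused = [(cv[0] + ct[0]) / 2.0]                    # average the approximation band
    for dv, dt in zip(cv[1:], ct[1:]):                 # per-level detail bands (H, V, D)
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, dt)))
    out = pywt.waverec2(fused, wavelet)
    return out[:visible.shape[0], :visible.shape[1]]   # crop any padding from reconstruction

# Hypothetical usage with synthetic data standing in for registered VI/IR frames.
vi = np.random.rand(256, 256)
ir = np.random.rand(256, 256)
print(fuse_mst(vi, ir).shape)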

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 429
190 Effects of Irrigation Scheduling and Soil Management on Maize (Zea mays L.) Yield in Guinea Savannah Zone of Nigeria

Authors: I. Alhassan, A. M. Saddiq, A. G. Gashua, K. K. Gwio-Kura

Abstract:

The main objective of any irrigation program is the development of an efficient water management system to sustain crop growth and development and avoid physiological water stress in the growing plants. A field experiment to evaluate the effects of some soil moisture conservation practices on the yield and water use efficiency (WUE) of maize was carried out in three locations (Mubi and Yola in the northern Guinea Savannah and Ganye in the southern Guinea Savannah of Adamawa State, Nigeria) during the dry seasons of 2013 and 2014. The experiment consisted of three irrigation levels (7-, 10- and 12-day irrigation intervals), two mulch levels (mulched and un-mulched) and two tillage practices (no tillage and minimum tillage), arranged in a randomized complete block design with a split-split plot arrangement and replicated three times. The Blaney-Criddle method was used to estimate crop evapotranspiration. The results indicated that the seven-day irrigation interval and the mulched treatment had a significant effect (P < 0.05) on grain yield and water use efficiency in all locations. The main effect of tillage on grain yield and WUE was non-significant (P > 0.05). The interaction effects of irrigation and mulch on grain yield and WUE were significant (P < 0.05) at Mubi and Yola. Generally, higher grain yield and WUE were recorded under mulching with the seven-day irrigation interval, whereas lower values were recorded un-mulched with the 12-day irrigation interval. Tillage exerted little influence on yield and WUE. Results from Ganye were generally higher than those recorded at Mubi and Yola; they also showed that an irrigation interval of 10 days with mulching could be adopted for the Ganye area, while a seven-day interval is more appropriate for Mubi and Yola.
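
For reference, the following Python sketch shows the simplified (FAO-24) form of the Blaney-Criddle estimate of reference evapotranspiration, ETo = p * (0.46 * T_mean + 8) in mm/day, together with a basic water use efficiency calculation. The temperature, daylight fraction, yield and water figures are hypothetical placeholders, not values from the study.

# Minimal sketch: simplified Blaney-Criddle ETo and a basic WUE calculation.
def blaney_criddle_eto(t_mean_c: float, p_daylight: float) -> float:
    """Reference ET (mm/day) from mean daily temperature (deg C) and mean daily
    percentage of annual daytime hours, p (expressed as a fraction of 100)."""
    return p_daylight * (0.46 * t_mean_c + 8.0)

def water_use_efficiency(grain_yield_kg_ha: float, water_applied_mm: float) -> float:
    """WUE as kg of grain per hectare per mm of water applied."""
    return grain_yield_kg_ha / water_applied_mm

eto = blaney_criddle_eto(t_mean_c=32.0, p_daylight=0.28)              # hypothetical values
wue = water_use_efficiency(grain_yield_kg_ha=3500.0, water_applied_mm=450.0)
print(f"ETo ~ {eto:.1f} mm/day, WUE ~ {wue:.2f} kg/ha per mm")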

Keywords: Irrigation, maize, mulching, tillage, guinea savannah.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1091
189 Wastewater Treatment in Moving-Bed Biofilm Reactor operated by Flow Reversal Intermittent Aeration System

Authors: B. K. Kim, D. Chang, D. J. Son, D. W. Kim, J. K. Choi, H. J. Yeon, C. Y. Yoon, Y. Fan, S. Y. Lim, K. H. Hong

Abstract:

The intermittent aeration process can easily be applied to an existing activated sludge system, is highly reliable against loading changes, and can be operated in a relatively simple way. Since the moving-bed biofilm reactor processes pollutants by attaching and retaining the microorganisms on the media, its efficiency can be higher than that of a suspended-growth biological treatment process, and sludge return can be reduced. In this study, the alternating-flow intermittent aeration process used in oxidation ditches is applied to a continuous flow stirred tank reactor (CFSTR), combining the advantages of both processes, and moving media are added with the aim of significantly reducing sludge return from the clarifier and securing reliable treated-water quality. The corresponding process is suitable as u-environment infrastructure in a future u-City and is expected to accelerate the implementation of the u-Eco city in conjunction with city-based services. The laboratory-scale system was operated at an HRT of 8 hours, excluding the final clarifier, with a 4-hour operating cycle and a system SRT of 10 days, and showed removal efficiencies of 97.7%, 73.1% and 9.4% for organic matter, TN and TP, respectively. After adding the media, the phosphorus removal efficiency remained at a similar level to that before the addition, but the nitrogen removal efficiency improved by 7-10%. In addition, at 25% media packing, the solids maintained at an MLSS of 1200-1400 were all attached onto the media, so that no sludge entered the clarifier and sludge return was no longer needed.
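
The removal efficiencies quoted above follow the usual definition, efficiency (%) = (C_in - C_out) / C_in * 100 for each constituent, as in the short Python sketch below. The influent and effluent concentrations are hypothetical placeholders, not measured values from the study.

# Minimal sketch of the removal-efficiency calculation for each constituent.
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percentage of a constituent removed between influent and effluent."""
    return (c_in - c_out) / c_in * 100.0

influent = {"COD": 430.0, "TN": 52.0, "TP": 6.4}   # hypothetical influent, mg/L
effluent = {"COD": 10.0, "TN": 14.0, "TP": 5.8}    # hypothetical effluent, mg/L

for constituent in influent:
    eff = removal_efficiency(influent[constituent], effluent[constituent])
    print(f"{constituent}: {eff:.1f}% removal")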

Keywords: Municipal wastewater treatment, biological nutrient removal, alternating flow intermittent aeration system, reversal flow intermittent aeration system, moving-bed biofilm reactor, CFSTR, u-City, u-Eco city.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2322