Search results for: variable step size

9181 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analyzed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. To the best of our knowledge, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
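
The imaginary part of the complex Pearson correlation coefficient can be computed from the analytic signals of two channels; a minimal sketch follows, assuming NumPy/SciPy and omitting the RICI window-adaptation rule (the function name and the Hilbert-transform construction are illustrative, not the authors' implementation).

```python
import numpy as np
from scipy.signal import hilbert

def imcpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient
    between two signals (one window's worth of samples)."""
    ax = hilbert(x - np.mean(x))          # analytic signal of channel 1
    ay = hilbert(y - np.mean(y))          # analytic signal of channel 2
    num = np.sum(ax * np.conj(ay))
    den = np.sqrt(np.sum(np.abs(ax) ** 2) * np.sum(np.abs(ay) ** 2))
    return float(np.imag(num / den))      # phase-lagged coupling only
```

Because the imaginary component discards zero-lag correlation, it is less sensitive to volume conduction, which is one motivation for using it in EEG connectivity analysis.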

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 197
9180 In situ One-Step Synthesis of Graphene Quantum Dots-Metal Free and Zinc Phthalocyanines Conjugates: Investigation of Photophysicochemical Properties

Authors: G. Fomo, O. J. Achadu, T. Nyokong

Abstract:

Nanoconjugates of graphene quantum dots (GQDs) and 4-(tetrakis-5-(trifluoromethyl)-2-mercaptopyridinephthalocyanine (H₂Pc(OPyF₃)₄) or 4-(tetrakis-5-(trifluoromethyl)-2-mercaptopyridinephthalocyaninato) zinc (II) (ZnPc(OPyF₃)₄) were synthesized via a novel in situ one-step route. The bottom-up approach for the prepared conjugates could ensure the intercalation of the phthalocyanines (Pcs) directly onto the edges or surface of the GQDs and/or non-covalent coordination using the π-electron systems of both materials. The as-synthesized GQDs and their Pc conjugates were characterized using different spectroscopic techniques, and their photophysicochemical properties were evaluated. The singlet oxygen quantum yields of the Pcs in the presence of GQDs were enhanced due to Förster resonance energy transfer (FRET) occurring within the conjugated hybrids. Hence, these nanoconjugates are potential materials for photodynamic therapy (PDT) and photocatalysis applications.

Keywords: graphene quantum dots, metal free fluorinated phthalocyanine, zinc fluorinated phthalocyanine, photophysicochemical properties

Procedia PDF Downloads 169
9179 Micro/Nano-Sized Emulsions Exhibit Antifungal Activity against Cucumber Downy Mildew

Authors: Kai-Fen Tu, Jenn-Wen Huang, Yao-Tung Lin

Abstract:

Cucumber is a major economic crop in the world; its global production in 2017 exceeded 71 million tonnes. Nonetheless, downy mildew, caused by Pseudoperonospora cubensis, is a devastating and common disease of cucumber in around 80 countries and causes severe economic losses. Long-term fungicide use also leads to fungicide resistance and decreases host resistance. In this study, six types of oil (neem oil, moringa oil, soybean oil, cinnamon oil, clove oil, and camellia oil) were selected to synthesize micro/nano-sized emulsions, and the disease control efficacy of these emulsions was evaluated. Moreover, oil concentrations (0.125%-1%) and the droplet size of the emulsion were studied. Results showed that the cinnamon-type emulsion had the best efficacy among these oils, and that the disease control efficacy of the emulsions increased as the oil concentration increased. Disease incidence and disease severity were measured by detached-leaf and pot experiments, respectively. Regarding the droplet size effect, the 114 nm droplets synthesized from the 0.25% cinnamon oil emulsion yielded the lowest disease incidence (6.67%) and the lowest disease severity (33.33%). Zoospore release was inhibited (5.33%), and sporangia germination was impaired. These results suggest that cinnamon oil emulsion can be a valuable and environmentally friendly alternative for controlling cucumber downy mildew, thereby also reducing the economic losses caused by the disease.

Keywords: downy mildew, emulsion, oil droplet size, plant protectant

Procedia PDF Downloads 119
9178 Thermodynamic Analysis of Ventilated Façades under Operating Conditions in Southern Spain

Authors: Carlos A. Domínguez Torres, Antonio Domínguez Delgado

Abstract:

In this work, we study the thermodynamic behavior of ventilated façades under summer operating conditions in Southern Spain. Under these climatic conditions, indoor comfort implies a high energy demand due to the high temperatures usually reached in this season in the considered geographical area. The aim of this work is to determine whether, under summer operating conditions in Southern Spain, ventilated façades provide energy savings compared to non-ventilated façades, and to deduce their behavior patterns in terms of energy efficiency. The air flow in the channel has been modeled using the Navier-Stokes equations for thermodynamic flows, and numerical simulations have been carried out with a 2D finite element approach. In this way, we analyze the behavior of ventilated façades under different weather conditions, such as variable wind, variable temperature, and different levels of solar irradiation. CFD computations show that the combined effect of the shading of the external wall and the ventilation by natural convection in the air gap achieves a reduction of the heat load during the summer period. This reduction has been evaluated by comparing the thermodynamic performance of two ventilated and two unventilated façades with the same geometry and thermophysical characteristics.

Keywords: passive cooling, ventilated façades, energy-efficient building, CFD, FEM

Procedia PDF Downloads 340
9177 Study of the Treatment Plant of the City of Chlef: Environmental Impact Study

Authors: Houmame Benbouali, Aboubakr Gribi

Abstract:

Risks, in general, exist in any project; one can hardly carry out a project without taking risks. Hydraulic works are rather complex projects in their design, realization, and exploitation, and are often subject to multiple risks that can affect their performance and have a negative impact on their environment. The present study was carried out to assess the impacts of the purification plant (STEP) of Chlef on the environment, examining the environmental impacts during the construction and design of this STEP. It is divided into two parts. The first part results from bibliographic research and contains three chapters (wastewater sanitation, general information on wastewater, and wastewater purification processes). The second part is an experimental part divided into four chapters (detailed initial state, description of the purification plant, evaluation of the impacts of the project, and analysis of measurements and recommendations).

Keywords: treatment plant, waste water, waste water treatment, Chlef

Procedia PDF Downloads 324
9176 Employee Engagement: Tool for Success of Higher Education in Thailand

Authors: Pooree Sakot, Marndarath Suksanga

Abstract:

Organizations are under increasing pressure to improve performance and maximize the contribution of every employee, and employee engagement has become an attractive business proposition. The triple bottom line consists of three Ps: profit, people, and planet. It aims to measure the financial, social, and environmental performance of the corporation over a period of time. People are the most important asset of every organization. Most studies suggest that employee engagement improves the bottom line in almost every instance, and it is well worth all organizational efforts to actively engage employees, since engaged employees have an impact on productivity and financial performance. Efficient leadership and effective management can take place if an emerging paradigm like employee engagement is appropriately understood and put into practice. Employee engagement spans an employee's whole tenure, from the first step, i.e., recruitment, to the last step, i.e., retirement. The HR practices of an organization play the major role in helping employees walk the extra mile, and effective employee engagement is the key component of improved organizational performance.

Keywords: employee engagement, higher education, tool, success

Procedia PDF Downloads 319
9175 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

The logistic regression model is the most widely used regression model for predicting meteorological drought probabilities, but when the dependent variable is extreme, the logistic model fails to capture drought probabilities adequately. In order to predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. The method of maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and the GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and the relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
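
A GEV-link binary regression of this kind can be fitted by direct maximum likelihood; the sketch below, with made-up data and a simple Nelder-Mead optimizer, is one way to set it up (the variable names and data are illustrative, not the paper's).

```python
import numpy as np
from scipy.optimize import minimize

def gev_cdf(eta, xi):
    """GEV distribution function evaluated at the linear predictor."""
    t = np.clip(1.0 + xi * eta, 1e-10, None)  # support constraint
    return np.exp(-t ** (-1.0 / xi))

def neg_log_lik(params, X, y):
    beta, xi = params[:-1], params[-1]
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Made-up data: intercept plus one rainfall covariate, binary drought flag.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.integers(0, 2, size=200)
res = minimize(neg_log_lik, x0=np.full(X.shape[1] + 1, 0.1),
               args=(X, y), method="Nelder-Mead")
beta_hat, xi_hat = res.x[:-1], res.x[-1]   # regression and shape estimates
```

The asymmetry of the GEV link is what lets the model track probabilities near 0 or 1 better than the symmetric logistic link.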

Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 187
9174 Influence of Surface Fault Rupture on Dynamic Behavior of Cantilever Retaining Wall: A Numerical Study

Authors: Partha Sarathi Nayek, Abhiparna Dasgupta, Maheshreddy Gade

Abstract:

Earth retaining structures play a vital role in stabilizing unstable road cuts and slopes in mountainous regions. Retaining structures located in seismically active regions like the Himalayas may experience moderate to severe earthquakes. An earthquake produces two kinds of ground motion: permanent quasi-static displacement (fault rupture) on the fault rupture plane, and transient vibration traveling a long distance. There has been extensive research to understand the dynamic behavior of retaining structures subjected to transient ground motions; however, understanding of the effects of the fault rupture phenomenon on retaining structures is limited. The presence of shallow crustal active faults and natural slopes in the Himalayan region further highlights the need to study the response of retaining structures subjected to fault rupture. In this paper, an attempt has been made to understand the dynamic response of a cantilever retaining wall subjected to surface fault rupture. For this purpose, a 2D finite element model consisting of a retaining wall, backfill, and foundation has been developed using Abaqus 6.14 software. The backfill and foundation materials are modeled as per the Mohr-Coulomb failure criterion, and the wall is modeled as linear elastic. In this study, the interaction between backfill and wall is modeled as 'surface-surface contact.' The entire simulation process is divided into three steps: the initial step, the gravity load step, and the fault rupture step. In the initial step, the interaction property between wall and soil is defined and fixed boundary conditions are applied to all boundary elements. In the next step, the gravity load is applied, and the boundary elements are allowed to move in the vertical direction to incorporate the settlement of soil due to the gravity load. In the final step, surface fault rupture is applied to the wall-backfill system. For this purpose, the foundation is divided into two blocks, namely the hanging wall block and the footwall block; a finite fault rupture displacement is applied to the hanging wall part while the footwall bottom boundary is kept fixed. Initially, a numerical analysis is performed considering the reverse fault mechanism with a dip angle of 45°. The simulated results are presented as contour maps of the permanent displacements of the wall-backfill system. These maps highlight that surface fault rupture can induce permanent displacement in both horizontal and vertical directions, which can significantly influence the dynamic behavior of the wall-backfill system. Further, the influence of fault mechanism, dip angle, and surface fault rupture position is also investigated in this work.

Keywords: surface fault rupture, retaining wall, dynamic response, finite element analysis

Procedia PDF Downloads 100
9173 Non-linear Model of Elasticity of Compressive Strength of Concrete

Authors: Charles Horace Ampong

Abstract:

Non-linear models have been found useful in modeling the elasticity (a measure of the degree of responsiveness) of a dependent variable with respect to a set of independent variables, ceteris paribus. This constant-elasticity principle was applied to the dependent variable (compressive strength of concrete in MPa), which was found to be non-linearly related to the independent variable (water-cement ratio in kg/m3) for given ages of concrete in days (3, 7, 28) at different levels of the admixtures Superplasticizer (in kg/m3), Blast Furnace Slag (in kg/m3), and Fly Ash (in kg/m3). The levels of the admixtures were categorized as: S1 = some Superplasticizer added and S0 = none added; B1 = some Blast Furnace Slag added and B0 = none added; F1 = some Fly Ash added and F0 = none added. The number of observations (samples) used for the research was one hundred and thirty-two (132) in all. For Superplasticizer, the compressive strength of concrete was more elastic with regard to the water-cement ratio at the S1 level than at the S0 level for the given concrete ages of 3, 7, and 28 days. For Blast Furnace Slag, compressive strength with regard to the water-cement ratio was more elastic at the B0 level than at the B1 level for concrete ages of 3, 7, and 28 days; the same held for Fly Ash at the F0 level relative to the F1 level. The research also tested different combinations of the levels of Superplasticizer, Blast Furnace Slag, and Fly Ash. Compressive strength elasticity with regard to the water-cement ratio was lowest (elasticity = -1.746) with the combination of S0, B0, and F0 for a concrete age of 3 days, followed by an elasticity of -1.611 with S0, B0, and F0 for a concrete age of 7 days; the highest was an elasticity of -1.414 with S0, B0, and F0 for a concrete age of 28 days. Based on the preceding outcomes, three (3) non-linear model equations for predicting the output elasticity of compressive strength of concrete (in %) or the value of compressive strength of concrete (in MPa) with regard to the water-cement ratio were formulated, one for each of the three ages of concrete under investigation (3, 7, and 28 days). The three models showed that higher elasticity translates into higher compressive strength, and they revealed a trend of increasing concrete strength from 3 to 28 days for a given water-cement ratio. Using the models, an increasing modulus of elasticity from 3 to 28 days was deduced.
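
Under a constant-elasticity specification, the elasticity is the slope of a log-log regression; the snippet below illustrates this with made-up strength and water-cement values (placeholders, not the study's 132 observations).

```python
import numpy as np

# Placeholder strength (MPa) and water-cement values for one age group.
wc = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60])
strength = np.array([52.0, 46.5, 41.8, 37.9, 34.6, 31.8])

# Constant-elasticity model: ln(strength) = a + e * ln(wc), so the slope e
# is the elasticity d(ln strength) / d(ln wc), constant across the range.
e, a = np.polyfit(np.log(wc), np.log(strength), 1)
print(f"elasticity = {e:.3f}")  # negative: strength falls as w/c rises
```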

Keywords: concrete, compressive strength, elasticity, water-cement

Procedia PDF Downloads 285
9172 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study

Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener

Abstract:

Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of the quality of locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT), and 12 healthy individuals (HI) participated in the study. All participants were evaluated on a treadmill, recording gait characteristics including mean step length, step length variability, ambulation index, and time on each foot. Participants walked at their preferred speed for six minutes; data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: According to the Kruskal-Wallis test, there were differences between the groups in intact limb step length variation, time on each foot, ambulation index, and mean age (p < .05). Pairwise analyses showed differences between TT and TF in residual limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024), and ambulation index (p = .003), in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001), and ambulation index (p < .001), in favor of the HI group, and between the TF and HI groups in intact limb variation (p = .001), time on the intact foot (p = .01), and ambulation index (p < .001), in favor of the HI group. The groups differed in mean age, the HI group being younger (p < .05). The groups were similar in step length (p > .05), and the individuals with lower limb loss were similar in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthetic use or effective gait rehabilitation, as all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may have resulted from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
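
The omnibus-then-pairwise testing described here can be reproduced with SciPy; the sketch below uses invented step-length-variability samples for the three groups (the numbers are placeholders, and the authors' exact pairwise procedure is not specified).

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Invented step-length-variability samples for the three groups.
tf = np.array([4.1, 3.8, 4.5, 4.9, 4.2, 3.9, 4.4, 4.0, 4.6, 4.3])
tt = np.array([3.2, 2.9, 3.5, 3.1, 3.4, 3.0, 3.3, 2.8, 3.6, 3.2, 3.1, 3.0])
hi = np.array([2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.1, 2.0, 2.2, 1.9, 2.3])

h, p = kruskal(tf, tt, hi)                 # omnibus test across the groups
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
if p < 0.05:                               # pairwise follow-up, as in the study
    pairs = [("TT vs TF", tt, tf), ("TT vs HI", tt, hi), ("TF vs HI", tf, hi)]
    for name, a, b in pairs:
        _, pp = mannwhitneyu(a, b, alternative="two-sided")
        print(f"{name}: p = {pp:.4f}")
```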

Keywords: lower limb loss, amputee, gait variability, gait analyses

Procedia PDF Downloads 272
9171 Battery Control with Moving Average Algorithm to Smoothen the Intermittent Output Power of Photovoltaic Solar Power Plants in Off-Grid Configuration

Authors: Muhammad Gillfran Samual, Rinaldy Dalimi, Fauzan Hanif Jufri, Budi Sudiarto, Ismi Rosyiana Fitri

Abstract:

Solar energy is increasingly recognized as an important future energy source due to its abundant availability and renewable nature. However, its intermittent nature can cause fluctuations in the electricity produced, making it difficult to guarantee a stable and reliable electricity supply. One solution is to use batteries in a photovoltaic solar power plant system with a moving average control algorithm, which helps smooth and reduce fluctuations in the solar power output. The adjustable parameter of the moving average algorithm is the window size, i.e., the width of the arithmetic average of the photovoltaic output power over time. This research evaluates the effect of changing the window size parameter on the resulting smoothed photovoltaic output power and on the technical behavior of the battery, i.e., its power and energy usage. Based on the evaluation, increasing the window size slows the response of the photovoltaic output power to changes in irradiation and improves the smoothing quality of the intermittent photovoltaic output power. In addition, increasing the window size reduces the maximum power received on the load side and increases the amount of energy used by the battery during the power smoothing process, which in turn increases the required battery capacity.
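
A causal (trailing) moving average and the implied battery duty can be sketched in a few lines; the PV profile, sampling interval, and window length below are invented for illustration.

```python
import numpy as np

def trailing_moving_average(x, window):
    """Causal moving average: each sample is the mean of the last `window` samples."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    out = np.empty(len(x))
    out[window - 1:] = (c[window:] - c[:-window]) / window
    out[:window - 1] = c[1:window] / np.arange(1, window)  # ramp-in: shorter averages
    return out

# Invented 5-minute PV profile over one day (288 samples).
rng = np.random.default_rng(1)
pv = np.maximum(0.0, np.sin(np.linspace(0, np.pi, 288)) + 0.2 * rng.normal(size=288))

smoothed = trailing_moving_average(pv, window=12)     # 1-hour window
battery_power = smoothed - pv          # > 0: battery discharges, < 0: it charges
battery_energy = np.cumsum(battery_power) * (5 / 60)  # per-unit energy, hours
```

A wider window yields a flatter `smoothed` trace but larger excursions of `battery_energy`, which is the capacity trade-off the abstract describes.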

Keywords: battery, intermittent, moving average, photovoltaic, power smoothing

Procedia PDF Downloads 39
9170 A Study of Mode Choice Model Improvement Considering Age Grouping

Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho

Abstract:

The purpose of this study is to provide an improved mode choice model with parameters that include age grouping for prime-aged and older travelers. In this study, data from the 2010 Household Travel Survey were used, and improper samples were removed through the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. By preprocessing the data, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this manipulation, the mode choice model was constructed in LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters: three age groups for each of three modes. The test was then conducted again for the mode choice model with the significant parameters, the travel cost variable, and the travel time variable. As a result of the model estimation, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
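
In a multinomial logit model of this kind, choice probabilities follow the softmax of the systematic utilities; the sketch below uses invented coefficients, units, and level-of-service values for three modes (car, bus, metro) purely for illustration.

```python
import numpy as np

def mnl_probabilities(V):
    """Multinomial logit probabilities from systematic utilities V
    (rows = observations, columns = alternatives); a stabilized softmax."""
    expV = np.exp(V - V.max(axis=1, keepdims=True))
    return expV / expV.sum(axis=1, keepdims=True)

# Invented coefficients and level-of-service values for car / bus / metro.
beta_time, beta_cost, beta_age65_bus = -0.04, -0.002, 0.8
time = np.array([[25.0, 40.0, 32.0]])        # minutes
cost = np.array([[3000.0, 1200.0, 1350.0]])  # hypothetical currency units
age65_share = 0.3                            # share of travelers over 65

V = beta_time * time + beta_cost * cost
V[:, 1] += beta_age65_bus * age65_share      # older travelers favor the bus
print(mnl_probabilities(V))
```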

Keywords: age grouping, aging, mode choice model, multinomial logit model

Procedia PDF Downloads 315
9169 Application of Analytical Method for Placement of DG Unit for Loss Reduction in Distribution Systems

Authors: G. V. Siva Krishna Rao, B. Srinivasa Rao

Abstract:

The main aim of this paper is to implement a technique using distributed generation (DG) in distribution systems to reduce distribution system losses and improve voltage profiles. A fuzzy logic technique is used to select the proper location of the DG, and an analytical method is proposed to calculate the size of the DG unit at any power factor. The optimal sizes of the DG units are compared with the optimal sizes obtained using a genetic algorithm. The suggested method is programmed in MATLAB and tested on the IEEE 33-bus system, and the results are presented.
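
Once a candidate bus has been selected, the loss-minimizing DG size can be found by evaluating the total real power loss over a sweep of injection levels; the sketch below assumes a user-supplied `system_loss` routine (e.g., a power-flow or exact-loss-formula evaluation), which stands in for the paper's analytical method and is not reproduced here.

```python
import numpy as np

def optimal_dg_size(system_loss, bus, p_max, step=0.05):
    """Scan candidate DG outputs at a pre-selected bus and keep the size
    that minimizes total real power loss (MW units assumed)."""
    sizes = np.arange(0.0, p_max + step, step)            # candidate outputs
    losses = np.array([system_loss(bus, p) for p in sizes])
    best = int(np.argmin(losses))
    return sizes[best], losses[best]
```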

Keywords: DG Units, sizing of DG units, analytical methods, optimum size

Procedia PDF Downloads 464
9168 Efficacy of Microbial Metabolites Obtained from Saccharomyces cerevisiae as Supplement for Quality Milk Production in Dairy Cows

Authors: Sajjad ur Rahman, Mariam Azam, Mukarram Bashir, Seemal Javaid, Aoun Muhammad, Muhammad Tahir, Jawad, Hannan Khan, Muhammad Zohaib

Abstract:

Soya hulls and wheat bran partially fermented with Saccharomyces cerevisiae (DL-22 S/N) were substantiated as a natural source for quality milk production. Saccharomyces cerevisiae (DL-22 S/N) was grown under in-vivo conditions and processed through two-step fermentation with the substrates. The extra-pure metabolites (XPM) were dried and processed to maintain 1 mm mesh-size particles for supplementation of pelleted feed. Two groups of cows (Holstein Friesian), each with 8 animals of similar age and lactation, were given the experimental concentrates: Group A was fed daily with 12 g of XPM in a 22% protein pelleted feed, while Group B received no metabolites in its feed. Over the thirty-nine-day trial, improvements were observed in overall health, body score, milk protein, milk fat, ash, solids-not-fat (SNF), yield, and the incidence rate of mastitis. The collected data revealed an improvement in milk production of 2.02 liter/h/d, a reduction of 3.75% in milk fat, and an increase of around 0.58% in milk SNF. The ash content ranged between 6.4% and 7.5%, and the incidence of mastitis was reduced to less than 2%.

Keywords: microbial metabolites, Saccharomyces cerevisiae, milk production, fermentation, post-biotic metabolites, immunity

Procedia PDF Downloads 76
9167 The Effect of Diet Intervention for Breast Cancer: A Meta-Analysis

Authors: Bok Yae Chung, Eun Hee Oh

Abstract:

Breast cancer patients require more nutritional intervention than others. However, few studies have attempted to assess overall nutritional status, to reduce body weight and BMI by improving diet, and to improve the prognosis of cancer for breast cancer patients. The purpose of this study was to evaluate the effect of diet intervention in breast cancer patients through meta-analysis. For this purpose, 16 studies were selected using PubMed, ScienceDirect, ProQuest, and CINAHL. The meta-analysis was performed using a random-effects model, and the effect size on outcome variables in breast cancer was calculated. The pooled effect size for the outcome variables of diet intervention was large. For heterogeneity, moderator analysis was performed using intervention type and intervention duration; none of the moderators showed a significant difference. Diet intervention has significant positive effects on outcome variables in breast cancer. As a result, it is suggested that the timing of the intervention should be no more than six months, but a strategy for sustaining long-term intervention effects should be added if nutritional intervention is to be administered to breast cancer patients in the future.
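
A random-effects pooled effect size is commonly computed with the DerSimonian-Laird estimator; the sketch below shows that computation (the abstract does not name its specific estimator, so this choice is an assumption).

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled effect size (DerSimonian-Laird)."""
    w = 1.0 / variances                      # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = 1.0 / (variances + tau2)
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se, tau2
```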

Keywords: breast cancer, diet, meta-analysis, intervention

Procedia PDF Downloads 420
9166 The Twelfth Rib as a Landmark for Surgery

Authors: Jake Tempo, Georgina Williams, Iain Robertson, Claire Pascoe, Darren Rama, Richard Cetti

Abstract:

Introduction: The twelfth rib is commonly used as a landmark for surgery; however, its variability in length has not been formally studied. This highly variable rib length provides a challenge for urologists seeking a consistent landmark for percutaneous nephrolithotomy and retroperitoneoscopic surgery. Methods and materials: We analysed CT scans of 100 adults who had imaging between 23rd March and 12th April 2020 at an Australian hospital. We measured the distance from the mid-sagittal line to the twelfth rib tip in the axial plane as a surrogate for true rib length, as well as the distance from the twelfth rib tip to the kidney, spleen, and liver. Results: The length from the mid-sagittal line to the right twelfth rib tip varied from 46 mm (percentile 95% CI 40 to 57) to 136 mm (percentile 95% CI 133 to 138). On the left, the distances varied from 55 mm (percentile 95% CI 50 to 64) to 134 mm (percentile 95% CI 131 to 135). Twenty-three percent of people had an organ lying between the tip of the twelfth rib and the kidney on the right, and 11% had the same finding on the left. Conclusion: The twelfth rib is highly variable in its length, and similar variability was recorded in the distance from the tip to intra-abdominal organs. Given how frequently organs lie between the tip of the rib and the kidney, the rib should not be used as a landmark for accessing the kidney without prior knowledge of the individual patient's anatomy, as seen on imaging.

Keywords: PCNL, rib, anatomy, nephrolithotomy

Procedia PDF Downloads 101
9165 Application of Fuzzy TOPSIS in Evaluating Green Transportation Options for Dhaka Megacity

Authors: Md. Moniruzzaman, Thirayoot Limanond

Abstract:

As its most visible indicator, the transport system of a city reflects how developed the city is. Dhaka megacity has a mixed composition of motorized and non-motorized modes of transport, and the number of vehicles is escalating over time. This poses associated environmental costs, such as air pollution and noise, which degrade the quality of life in the city. Consequently, sustainable transport, and more specifically green transport from an environmental point of view, has become a prime choice for transport professionals coping with the crisis. Currently, the city authority is planning to implement sustainable transport systems that could serve the pressing demands of the present and effectively meet future needs. This study focuses on the selection and evaluation of green transportation systems among potential alternatives on a priority basis. In this paper, fuzzy TOPSIS, a multi-criteria decision method, is presented to identify the highest-priority alternative. In the first step, twenty-one individual criteria for sustainability assessment are selected. In the following step, experts provide linguistic ratings to the potential alternatives with respect to the selected criteria, and the approach is used to generate aggregate scores for sustainability assessment and selection of the best alternative. In the third step, a sensitivity analysis is performed to understand the influence of criteria weights on the decision-making process. The key strength of the fuzzy TOPSIS approach is its practical applicability, generating good-quality solutions even under uncertainty.
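
The core fuzzy TOPSIS computation (weighted normalized triangular fuzzy ratings, fuzzy ideal solutions, vertex distances, and closeness coefficients) can be sketched as below; the alternatives, criteria, ratings, and weights are invented, and a real application would first aggregate the linguistic ratings of several experts.

```python
import numpy as np

# 3 alternatives x 2 benefit criteria; triangular fuzzy ratings (l, m, u).
ratings = np.array([
    [[5, 7, 9], [3, 5, 7]],
    [[7, 9, 9], [5, 7, 9]],
    [[1, 3, 5], [7, 9, 9]],
], dtype=float)
weights = np.array([[0.6, 0.6, 0.6], [0.4, 0.4, 0.4]])  # crisp weights as fuzzy triples

# Normalize benefit criteria by the column-wise maximum upper bound.
norm = ratings / ratings[..., 2].max(axis=0)[None, :, None]
v = norm * weights[None, :, :]            # weighted normalized fuzzy matrix

fpis = v.max(axis=0)                      # fuzzy positive ideal (one common convention)
fnis = v.min(axis=0)                      # fuzzy negative ideal

def vertex_distance(a, b):
    """Vertex distance between triangular fuzzy numbers."""
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_plus = vertex_distance(v, fpis[None]).sum(axis=1)
d_minus = vertex_distance(v, fnis[None]).sum(axis=1)
cc = d_minus / (d_plus + d_minus)         # closeness coefficient, larger is better
print(np.argsort(-cc))                    # priority ranking of the alternatives
```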

Keywords: green transport, multi-criteria decision approach, urban transportation system, sustainability assessment, fuzzy theory, uncertainty

Procedia PDF Downloads 278
9164 Recovery of Metals from Electronic Waste by Physical and Chemical Recycling Processes

Authors: Muammer Kaya

Abstract:

The main purpose of this article is to provide a comprehensive review of various physical and chemical processes for electronic waste (e-waste) recycling, and of their advantages and shortfalls in achieving a cleaner process of waste utilization, with special attention to the extraction of metallic values. The current status and future perspectives of waste printed circuit board (PCB) recycling are described. E-waste characterization, dismantling/disassembly methods, liberation and classification processes, and composition determination techniques are covered. Manual selective dismantling and metal-nonmetal liberation at -150 µm with two-step crushing are found to be the best. After size reduction, mainly physical separation/concentration processes employing gravity, electrostatic, and magnetic separators, froth flotation, etc., which are commonly used in mineral processing, are critically reviewed for the separation of metals and non-metals, along with useful utilizations of the non-metallic materials. The recovery of metals from e-waste material after physical separation through pyrometallurgical, hydrometallurgical, or biohydrometallurgical routes is also discussed, along with purification and refining, and some suitable flowsheets are given. It seems that the hydrometallurgical route will be a key player in base and precious metal recovery from e-waste. E-waste recycling will be a very important sector in the near future from both economic and environmental perspectives.

Keywords: e-waste, WEEE, recycling, metal recovery, hydrometallurgy, pyrometallurgy, biometallurgy

Procedia PDF Downloads 335
9163 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once a document has been digitized through the scanning system and binarization has been performed, document skew correction is required before further image analysis. Research efforts have been devoted to this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on several performance criteria, the most important being the accuracy of skew angle detection, the range of detectable skew angles, the speed of processing the image, the computational complexity, and consequently the memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angular step size is, and finer steps consume more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough Transform is used, there is a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time, and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. To increase accuracy, if the angle estimated by the basic algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place, and the same process is iterated until the desired level of accuracy is achieved. The procedure of our skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space and processing time are also tabulated, with skew angles assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
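
The coarse-to-fine refinement described here can be illustrated with a projection-based scoring function standing in for a full Hough accumulator; the sketch below refines the angular step by a factor of ten per level (the scoring function and refinement factor are illustrative choices, not the authors' exact implementation).

```python
import numpy as np

def projection_score(xs, ys, angle_deg, n_bins):
    """Variance of the projection histogram; it peaks sharply when text
    lines align with the projection direction."""
    rho = xs * np.sin(np.deg2rad(angle_deg)) + ys * np.cos(np.deg2rad(angle_deg))
    hist, _ = np.histogram(rho, bins=n_bins)
    return hist.var()

def coarse_to_fine_skew(binary_image, max_angle=45.0, levels=3):
    ys, xs = np.nonzero(binary_image)         # foreground (text) pixel coordinates
    center, half, step = 0.0, max_angle, 1.0  # level 1: integer-degree sweep
    for _ in range(levels):
        angles = np.arange(center - half, center + half + step, step)
        scores = [projection_score(xs, ys, a, binary_image.shape[0]) for a in angles]
        center = angles[int(np.argmax(scores))]
        half, step = step, step / 10.0        # refine around the winner
    return center
```

Each level sweeps only a narrow band around the previous estimate, which is what keeps both the running time and the accumulator memory small.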

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 146
9162 Particle Size Distribution Estimation of a Mixture of Regular and Irregular Sized Particles Using Acoustic Emissions

Authors: Ejay Nsugbe, Andrew Starr, Ian Jennions, Cristobal Ruiz-Carcel

Abstract:

This work investigates the possibility of using acoustic emissions (AE) to estimate the particle size distribution (PSD) of a mixture containing particles of different densities and geometries. The experiments involved a mixture of glass and polyethylene particles, ranging from 150-212 microns and 150-250 microns respectively, and an experimental rig that allowed the free fall of a continuous stream of particles onto a target plate on which the AE sensor was placed. Using a time-domain multiple-threshold method, it was observed that the PSD of the particles in the mixture could be estimated.
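
A time-domain multiple-threshold tally might look like the sketch below, which counts signal excursions in each amplitude band; the mapping from amplitude band to particle size class would come from calibration and is not shown (the whole construction is an illustrative guess at the method, not the authors' code).

```python
import numpy as np

def band_counts(signal, thresholds):
    """Count excursions whose peaks fall between consecutive amplitude
    thresholds, by differencing rising-edge counts per threshold."""
    env = np.abs(signal)
    ths = np.sort(np.asarray(thresholds, dtype=float))
    counts = np.array([
        int(np.sum((env[1:] > th) & (env[:-1] <= th)))  # rising edges above th
        for th in ths
    ])
    per_band = counts[:-1] - counts[1:]     # events peaking within each band
    return np.append(per_band, counts[-1])  # top band: above the highest threshold
```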

Keywords: acoustic emissions, particle sizing, process monitoring, signal processing

Procedia PDF Downloads 339
9161 Managing Work Risk in Small and Medium-Size Companies

Authors: Janusz K. Grabara, Bartłomiej Okwiet, Sebastian Kot

Abstract:

The purpose of the article is the presentation and analysis of job safety in small and medium-size enterprises in Poland, with reference to other EU countries. We present the theoretical aspects of risk with reference to managing small and medium enterprises, and then risk management in small and medium enterprises in Poland, which was subjected to a detailed analysis. We describe in detail the risks associated with the operation of the above-mentioned companies and analyse their levels at various stages and for different kinds of conducted activity.

Keywords: job safety, SME, work risk, risk management

Procedia PDF Downloads 484
9160 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by flow separation, recirculation, and reattachment for a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, hence the reason for investigating it. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique in separated flows are very difficult to find in the literature, and most situations where the Reynolds number effect is evaluated in separated flows involve numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908, and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio, and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh; turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; and d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels and thus decrease the uncertainty.
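
The moving block bootstrap used for the uncertainty analysis resamples overlapping blocks so that serial correlation within a velocity record is preserved; a minimal sketch follows (block length, statistic, and replication count chosen arbitrarily).

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=1000, stat=np.mean, seed=0):
    """Bootstrap standard error of `stat` for a serially correlated series,
    by resampling overlapping blocks of length `block_len`."""
    rng = np.random.default_rng(seed)
    n = len(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks)
        resample = blocks[idx].ravel()[:n]   # concatenate blocks, trim to n
        stats[b] = stat(resample)
    return stats.std(ddof=1)                 # bootstrap standard error
```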

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 128
9159 Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang

Abstract:

Treating data based on its location in memory has received much attention in recent years because of its different properties, which offer important opportunities for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache, and one of the important properties of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches, in order to keep stack data and non-stack data separate. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially since over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved using 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design improves by approximately 3.9%, on average, for the Rijndael benchmark when 1 KB of stack cache is added. The stack cache is simulated using the SimpleScalar toolset.
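
A split design of this kind can be modeled with two independent cache simulators fed from a tagged address trace; the sketch below uses a direct-mapped (1-way) model and an invented three-access trace (the sizes and the stack/non-stack tagging are illustrative and unrelated to the SimpleScalar setup).

```python
class DirectMappedCache:
    """Minimal direct-mapped (1-way) cache model that counts hits."""
    def __init__(self, size_bytes, line_bytes=32):
        self.n_lines = size_bytes // line_bytes
        self.line_bytes = line_bytes
        self.tags = [None] * self.n_lines
        self.hits = 0
        self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        line = addr // self.line_bytes
        idx, tag = line % self.n_lines, line // self.n_lines
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.tags[idx] = tag  # fill on miss

stack_cache = DirectMappedCache(2048)   # 2 KB, 1-way, as in the result above
data_cache = DirectMappedCache(16384)
trace = [(0x7FFF0010, True), (0x00401000, False), (0x7FFF0014, True)]  # (addr, is_stack)
for addr, is_stack in trace:
    (stack_cache if is_stack else data_cache).access(addr)
print(stack_cache.hits / stack_cache.accesses)
```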

Keywords: hit rate, locality of program, stack cache, stack data

Procedia PDF Downloads 293
9158 Optimizing the Probabilistic Neural Network Training Algorithm for Multi-Class Identification

Authors: Abdelhadi Lotfi, Abdelkader Benyettou

Abstract:

In this work, a training algorithm for probabilistic neural networks (PNN) is presented. The algorithm addresses one of the major drawbacks of PNN, namely the size of the hidden layer in the network. By using a cross-validation training algorithm, the number of hidden neurons is shrunk to a smaller set consisting of the most representative samples of the training set, without affecting the overall architecture of the network. The performance of the network is compared against that of a standard PNN for different databases from the UCI repository. The results show an important gain in network size and performance.
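
For context, a classic PNN stores one Gaussian (Parzen) kernel per training sample, which is exactly why the hidden layer grows with the training set; a compact sketch of the prediction rule follows (vectorized for clarity, with the cross-validation pruning step itself omitted).

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classic PNN: one Gaussian kernel per stored training sample;
    a class's score is the mean kernel activation over its samples."""
    classes = np.unique(y_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))        # hidden-layer activations
    scores = np.stack([k[:, y_train == c].mean(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]
```

Pruning the stored samples shrinks `X_train` (and hence the hidden layer) while leaving this architecture unchanged, which matches the abstract's claim.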

Keywords: classification, probabilistic neural networks, network optimization, pattern recognition

Procedia PDF Downloads 249
9157 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals

Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang

Abstract:

Ultrasound signals backscattered from soft tissues depend mainly on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. Quantitative analysis of ultrasonic backscattering is frequently implemented using a statistical approach, because the backscattered signal tends to behave as a random variable. Thus, statistical analysis, such as Nakagami statistics, has been applied to characterize the density and distribution of the scatterers in a sample. Yet the accuracy of the statistical analysis can be readily affected by the received signals, which depend on the nature of the incident ultrasound wave and the acoustic properties of the sample. In the present study, efforts were therefore made to explore the effects of the ultrasound operational mode and the attenuation of biological tissue on the estimation of the corresponding Nakagami statistical parameter (m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using different single-element ultrasound transducers and duty cycles of the incident tone burst, ranging from 3.5 to 7.5 MHz and 10 to 50%, respectively. The results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. Healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
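
The Nakagami m parameter is commonly estimated from the backscattered envelope by the inverse normalized variance of the squared envelope; a minimal sketch of that standard moment estimator (not necessarily the authors' exact one) is:

```python
import numpy as np

def nakagami_m(envelope):
    """Inverse normalized variance estimator: m = E[r^2]^2 / Var(r^2),
    computed from the backscattered envelope r."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()
```

Values of m below 1 indicate pre-Rayleigh statistics (sparse or clustered scatterers), m near 1 Rayleigh statistics, and m above 1 post-Rayleigh statistics, which is what makes the parameter useful for tissue characterization.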

Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation

Procedia PDF Downloads 311
9156 An Artificial Neural Network Model Based Study of Seismic Wave

Authors: Hemant Kumar, Nilendu Das

Abstract:

This study uses an ANN structure to predict the size of a future seismic event from the record of past events. ANN models, IMD (Indian Meteorological Department) data, and remote sensing were used to derive a number of parameters for estimating the magnitude that may occur in the future. A threshold was selected above the high-frequency content recorded in the area during the selected seismic activity. For human populations and local biodiversity, the challenge remains to obtain the right parameters relative to the frequency of impact. Throughout the study, the working assumption is that predicting seismic activity is a difficult process, not only because of the parameters involved, which can be analyzed and supported in research activity.

Keywords: ANN, Bayesian class, earthquakes, IMD

Procedia PDF Downloads 115
9155 Language Shapes Thought: An Experimental Study on English and Mandarin Native Speakers' Sequencing of Size

Authors: Hsi Wei

Abstract:

Does the language we speak affect the way we think? This question has been discussed for a long time from different perspectives. In this article, the issue is examined with an experiment on how speakers of different languages tend to sequence the sizes of ordinary objects. An essential difference between the usage of English and Mandarin is the way we sequence the sizes of places or objects. In English, when describing the location of something, we may say, for example, 'The pen is inside the trashcan next to the tree at the park.' In Mandarin, however, we would say, 'The pen is at the park next to the tree inside the trashcan.' It is clear that English generally uses a small-to-big sequence, while Mandarin uses the opposite. An experiment was therefore conducted to test whether this difference between the languages affects the speakers' performance on the two orderings. There were two groups of subjects: one consisted of English native speakers, the other of Mandarin native speakers. Within the experiment, nouns were shown to the subjects in groups of three, in their native languages. Before seeing each group of nouns, subjects first received an instruction: 'big to small', 'small to big', or 'repeat'. The subjects then had to sequence the following group of nouns as instructed or simply repeat them. After completing each sequencing or repetition in their minds, they pushed a button as a reaction; the repetition condition was included to capture mere reading time. The results showed that English native speakers reacted more quickly to 'small to big' sequencing, while Mandarin native speakers reacted more quickly to 'big to small'. To conclude, this study may be of importance as support for linguistic relativism, the view that the language we speak does shape the way we think.

Keywords: language, linguistic relativism, size, sequencing

Procedia PDF Downloads 274
9154 Proactive Competence Management for Employees: A Bottom-up Process Model for Developing Target Competence Profiles Based on the Employee's Tasks

Authors: Maximilian Cedzich, Ingo Dietz Von Bayer, Roland Jochem

Abstract:

In order for industrial companies to continue to succeed in dynamic, globalized markets, they must be able to train their employees in an agile manner and at short notice, in line with the exogenous conditions that arise. For this purpose, it is indispensable to operate a proactive competence management system for employees that recognizes qualification needs in a timely manner, so that they can be addressed promptly through qualification measures. However, hardly any approaches that include systematic, proactive competence management are to be found in the literature. To help close this gap, this publication presents a process model that systematically develops future-oriented target competence profiles bottom-up, based on the tasks of the employees. Concretely, in the first step, the tasks of the individual employees are examined under assumed future conditions; in other words, qualitative scenarios are considered for the individual tasks to determine how they are likely to change. In a second step, these scenario-based future tasks are translated into individual future-related target competencies of the employee, using a matrix of generic task properties. The final step validates the target competence profiles formed in this way within the framework of a management workshop. This process model provides industrial companies with a tool that they can use to determine the competencies their employees will require in the future and to compare them with the actual prevailing competencies. If gaps are identified between the target and the actual state, these qualification requirements can be closed in the short term by means of qualification measures.

Keywords: dynamic globalized markets, employee competence management, industrial companies, knowledge management

Procedia PDF Downloads 183
9153 Contemplating Charge Transport by Modeling of DNA Nucleobases Based Nano Structures

Authors: Rajan Vohra, Ravinder Singh Sawhney, Kunwar Partap Singh

Abstract:

Electrical charge transport through two basic DNA nucleobases, thymine and adenine, has been investigated and analyzed using the jellium model approach. FFT-2D computations have been performed with semi-empirical Extended Huckel Theory, using an atomistic toolkit, to evaluate charge transport metrics such as current and conductance. The resulting data are further evaluated in terms of the transmission spectrum, the HOMO-LUMO gap (HLG), and the number of electrons. We have scrutinized the behavior of the devices in the range of -2 V to 2 V with a step size of 0.2 V. We observe that both thymine and adenine can act as molecular devices when sandwiched between two gold probes. A prominent observation is a drop in the HLGs of adenine and thymine when working as devices, compared to their intrinsic values; this is comparatively more visible in the case of adenine. The current in the thymine-based device increases linearly with voltage in spite of its low conductance. Further, the broad transmission peaks indicate strong coupling of the electrodes to the scattering molecule (thymine). Moreover, the observed current for thymine is almost 3-4 times that observed for adenine. A negative differential resistance (NDR) effect is observed in the adenine-based device at higher bias voltages, which can be exploited in various future electronics applications.
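
In such transport calculations, the current is typically obtained from the computed transmission spectrum via the Landauer relation; the sketch below integrates a placeholder T(E) over the bias window (the symmetric bias split, temperature, and the Lorentzian T(E) itself are assumptions for illustration, not outputs of the authors' toolchain).

```python
import numpy as np

e_charge, h_planck = 1.602176634e-19, 6.62607015e-34   # SI units
kT = 0.025                                              # eV, ~room temperature

def fermi(E, mu):
    """Fermi-Dirac occupation, energies in eV (argument clipped for safety)."""
    z = np.clip((E - mu) / kT, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(z))

def landauer_current(transmission, energies, bias):
    """I(V) = (2e/h) * integral of T(E) [f_L(E) - f_R(E)] dE,
    with a symmetric bias split mu_L/R = +/- V/2 (an assumption)."""
    f_diff = fermi(energies, +bias / 2) - fermi(energies, -bias / 2)
    integral_eV = np.trapz(transmission(energies) * f_diff, energies)
    return (2 * e_charge / h_planck) * integral_eV * e_charge  # eV -> J

T = lambda E: 0.02 / (1.0 + ((E - 0.9) / 0.1) ** 2)  # placeholder Lorentzian resonance
E = np.linspace(-3, 3, 2001)
for v in np.arange(-2.0, 2.2, 0.2):                  # the abstract's bias sweep
    i_v = landauer_current(T, E, v)
```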

Keywords: adenine, DNA, extended Huckel, thymine, transmission spectra

Procedia PDF Downloads 144
9152 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we seek to determine the region of interest by applying morphological operations to extract the liver and spleen. The second part consists of improving the quality of the image gradient; in this step, we propose a method for improving the image gradient to reduce the over-segmentation problem, by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aims of this work are to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the liver and spleen segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images, evaluating our segmentation approach by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours traced manually by radiological experts. Liver segmentation achieved a sensitivity and specificity of sens_Liver = 96% and specif_Liver = 99%, respectively; spleen segmentation achieved similarly promising results, sens_Spleen = 95% and specif_Spleen = 99%.
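
The gradient-smoothing plus marker-controlled watershed pipeline can be sketched with scikit-image; the filter choices, marker thresholds, and function below are illustrative stand-ins for the paper's actual operators.

```python
import numpy as np
from skimage.filters import gaussian, sobel
from skimage.morphology import disk, opening
from skimage.segmentation import watershed

def segment_organ(ct_slice, lo=0.3, hi=0.6):
    """Marker-controlled watershed on a smoothed gradient (hypothetical
    thresholds; `ct_slice` is a 2D float image scaled to [0, 1])."""
    smoothed = gaussian(ct_slice, sigma=2)   # spatial filtering
    smoothed = opening(smoothed, disk(3))    # morphological filtering
    gradient = sobel(smoothed)               # improved image gradient
    markers = np.zeros_like(ct_slice, dtype=int)
    markers[smoothed < lo] = 1               # background seed
    markers[smoothed > hi] = 2               # organ seed
    labels = watershed(gradient, markers)    # flood from the seeds
    return labels == 2                       # binary organ mask
```

Seeding the watershed from markers, rather than from every regional minimum, is what keeps the over-segmentation problem mentioned above under control.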

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 314