1971 A Fast Community Detection Algorithm
Authors: Chung-Yuan Huang, Yu-Hsiang Fu, Chuen-Tsai Sun
Abstract:
Community detection represents an important data-mining tool for analyzing and understanding real-world complex network structures and functions. We believe that at least four criteria determine the appropriateness of a community detection algorithm: (a) it produces usable normalized mutual information (NMI) and modularity results for social networks, (b) it overcomes resolution limitation problems associated with synthetic networks, (c) it produces good NMI results and performance efficiency for Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks, and (d) it produces good modularity and performance efficiency for large-scale real-world complex networks. To our knowledge, no existing community detection algorithm meets all four criteria. In this paper, we describe a simple hierarchical arc-merging (HAM) algorithm that uses network topologies and rule-based arc-merging strategies to identify community structures that satisfy the criteria. We used five well-studied social network datasets and eight sets of LFR benchmark networks to validate the ground-truth community correctness of HAM, eight large-scale real-world complex networks to measure its performance efficiency, and two synthetic networks to determine its susceptibility to resolution limitation problems. Our results indicate that the proposed HAM algorithm is capable of providing satisfactory performance efficiency and that HAM-identified communities were close to ground-truth communities in social and LFR benchmark networks while overcoming resolution limitation problems.
Keywords: complex network, social network, community detection, network hierarchy
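As an illustration (not taken from the paper), the NMI score in criterion (a) compares an algorithm's partition against ground-truth community labels; a minimal NumPy sketch:

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two community assignments:
    NMI = 2*I(A;B) / (H(A) + H(B)), computed from the label contingency table."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    n = len(a)
    ua, ia = np.unique(a, return_inverse=True)
    ub, ib = np.unique(b, return_inverse=True)
    cont = np.zeros((len(ua), len(ub)))
    for i, j in zip(ia, ib):          # co-occurrence counts
        cont[i, j] += 1
    p = cont / n
    pa = p.sum(axis=1)
    pb = p.sum(axis=0)
    nz = p > 0                        # convention: 0*log(0) = 0
    mi = np.sum(p[nz] * np.log(p[nz] / np.outer(pa, pb)[nz]))
    ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    return 2 * mi / (ha + hb) if (ha + hb) > 0 else 1.0

# identical partitions (up to relabeling) score 1; independent ones score 0
print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

A value of 1 means the detected communities match the ground truth exactly, regardless of how the labels are named.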
Procedia PDF Downloads 227
1970 The Influence of Coarse Aggregate Morphology on Concrete Workability: A Case Study with Algerian Crushed Limestone
Authors: Ahmed Boufedah Badissi, Ahmed Beroual, Farid Boursas
Abstract:
This research aims to elucidate the role of coarse aggregate in influencing the fresh properties of normal-strength concrete. Specifically, it aims to identify the optimal gradation of coarse aggregate to enhance workability. While existing literature discusses the impact of aggregate granularity on concrete workability, few numerical data or models are available to quantify the relationship between workability, granularity, and coarse aggregate shape. The main objective is to create a model that describes how coarse aggregate morphology contributes to fresh concrete properties. To investigate the effect of coarse aggregate gradation on Normal Strength Concrete (NSC) workability, various combinations of coarse aggregates (4/22.4 mm) were produced in the laboratory, utilizing three elementary classes: finer coarse aggregate 4/8 mm (Fca), medium coarse aggregate 8/16 mm (Mca), and coarser coarse aggregate 16/22.4 mm (Cca). We introduced a factor, FCR (Finer to Coarser coarse aggregate Ratio), as a numerical parameter to provide a quantitative evaluation and a more detailed analysis of results. Quantitative characterization parameters for coarse aggregate morphology were established, exploring the influence of particle size distribution, specific surface, and aggregate shape on workability. The research findings are significant for establishing correlations between coarse aggregate morphology and concrete properties. FCR emerges as a valuable tool for predicting the impact of aggregate gradation variations on concrete. The results of this study create a valuable database for construction professionals and concrete producers, affirming that the fresh properties of NSC are intricately linked to coarse aggregate morphology, particularly gradation.
Keywords: morphology, coarse aggregate, workability, fresh properties, gradation
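The abstract does not give FCR's exact formula; assuming it is the proportion of the Fca (4/8 mm) class relative to the Cca (16/22.4 mm) class by mass, a trivial sketch would be:

```python
def fcr(mass_fca, mass_cca):
    """Finer-to-Coarser coarse aggregate Ratio (FCR), read here as the mass of
    the 4/8 mm class (Fca) over the 16/22.4 mm class (Cca). The mass basis is
    our assumption; the paper defines the exact basis."""
    if mass_cca <= 0:
        raise ValueError("coarser-fraction mass must be positive")
    return mass_fca / mass_cca

print(fcr(300.0, 600.0))  # 0.5
```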
Procedia PDF Downloads 62
1969 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case
Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang
Abstract:
In this real-world case, timetabling the LRT network as a whole is rather challenging for the operator: they must manually create a timetable that avoids various route conflicts while satisfying a given interval and a fixed number of rolling stocks, but the outcome is not satisfactory. Therefore, the operator adopts a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval and number of rolling stocks, which requires extra manual intervention. To address these problems, a one-off model for timetabling is presented in this paper to simplify the procedure of timetabling. Before the timetabling procedure starts, this paper presents how the dual-line system with a ring and several branches is turned into a simpler structure. Then, a non-linear programming model is presented in two stages. In the first stage, the model sets a series of constraints aiming to calculate a proper timing for coordinating the two lines by adjusting the turnback time at termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the possible minimum interval, observing that the more imbalanced the ratio is, the less feasible it becomes to provide frequent service under such strict constraints.
Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination
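A hedged sketch of the kind of inequality constraint the second stage imposes: two trains may not occupy the same junction at once, so their occupation windows must not overlap (the function names and the fixed blocking-time model are our simplifications, not the paper's formulation):

```python
def conflict_free(arrivals, occupation):
    """Check pairwise that route-occupation windows at a shared junction do not
    overlap. arrivals: entry times (minutes) of each train at the junction;
    occupation: blocking time (minutes) during which the route is locked."""
    windows = sorted((t, t + occupation) for t in arrivals)
    # each window must end before the next one begins
    return all(windows[i][1] <= windows[i + 1][0] for i in range(len(windows) - 1))

print(conflict_free([0, 5, 10], 4))  # True: windows [0,4], [5,9], [10,14]
print(conflict_free([0, 3, 10], 4))  # False: [0,4] overlaps [3,7]
```

In the actual model such pairwise inequalities become constraints on the decision variables (departure times, turnback times) of the non-linear program.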
Procedia PDF Downloads 87
1968 Exchange Rate, Market Size and Human Capital Nexus Foreign Direct Investment: A Bound Testing Approach for Pakistan
Authors: Naveed Iqbal Chaudhry, Mian Saqib Mehmood, Asif Mehmood
Abstract:
This study investigates the motivators of foreign direct investment (FDI), providing a useful policy tool and novel results for the case of Pakistan. The study considers exchange rate, market size and human capital as the motivators for attracting FDI. In this regard, annual time series data have been collected for the period 1985–2010, and the Augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests are utilized to determine the stationarity of the variables. A bound testing approach to co-integration was applied because the variables included in the model are I(1), i.e., stationary at the first difference. The empirical findings of this study confirm the long-run relationship among the variables. Market size and human capital have a strong positive and significant impact, in the short and long run, on attracting FDI, whereas the exchange rate shows a negative impact in this regard. The significant negative coefficient of the ECM indicates that it converges towards equilibrium. The CUSUM and CUSUMSQ test plots lie within the critical value bounds, which indicates the stability of the estimated parameters. This model can therefore be used by Pakistan in policy and decision making. For achieving higher economic growth and economies of scale, the country should concentrate on the ingredients of this study so that it could attract more FDI as compared to other countries.
Keywords: ARDL, CUSUM and CUSUMSQ tests, ECM, exchange rate, FDI, human capital, market size, Pakistan
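The unit-root testing step above can be illustrated with the simplest (non-augmented) Dickey-Fuller regression; this NumPy sketch omits the lag augmentation and critical-value tables a real ADF test requires, so it only shows the idea, not the full procedure:

```python
import numpy as np

def df_tstat(y):
    """t-statistic of the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t.
    A strongly negative b/se(b) is evidence against a unit root (I(1))."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
walk = np.cumsum(e)                 # I(1): unit root, t-stat near zero
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + e[t]    # I(0): strongly negative t-stat
print(df_tstat(walk), df_tstat(ar1))
```

In practice one would use a library implementation (e.g. statsmodels' `adfuller`) that adds lagged differences and proper critical values.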
Procedia PDF Downloads 392
1967 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contamination comprised coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux and forensic source evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculating current mass flux estimates and a forensic assessment using PAH fingerprint techniques, during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation which estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also between three and eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an on-going reduction in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
Procedia PDF Downloads 239
1966 The Discussions of Love, Determinism, and Providence in Ibn Sina (Avicenna) and al-Kirmani
Authors: Maria De Cillis
Abstract:
This paper addresses the subject of love in two of the most prominent Islamic philosophers: Ibn Sīnā (known in the Latin world as Avicenna, d. 1037) and al-Kirmānī (d. 1021). By surveying the connection that the concept of love entertains with the notions of divine providence and determinism in the luminaries’ theoretical systems, the present paper highlights differences and similarities in their respective approaches to the subjects. Through a thorough analysis of primary and secondary literature, it will be shown that Avicenna’s thought, which is mainly informed by the Aristotelian and Farābīan metaphysical and cosmological stances, is also integrated with mystical underpinnings. Particularly, in Avicenna’s Risāla fī’l-ʿishq, love becomes the expression of the divine providence which operates through the intellectual striving the souls undertake in their desire to return to their First Cause. Love is also portrayed as an instrument helping the divine decree to remain unadulterated by way of keeping existing beings within their species and genera, as well as an instrument which is employed by God to know and be known. This paper also discusses that if, on the one hand, al-Kirmānī speaks of love as the Aristotelian and Farābian motive-force spurring existents to achieve perfection and as a tool which facilitates the status quo of divine creation, on the other hand, he remains steadily positioned within Ismā‘īlī and Neoplatonic paradigms: the return of all loving-beings to their Source is interrupted at the level of the first Intellect, whilst God remains inaccessible and ineffable. By investigating his opus magnum, the Rāḥat al-ʿaql, we shall highlight how al-Kirmānī also emphasizes the notion of divine providence which allows humans to attain their ultimate completeness by following the teachings of the Imams, repositories of the knowledge necessary to serve the unreachable deity.
Keywords: Avicenna, determinism, love, al-Kirmani, Ismaili philosophy
Procedia PDF Downloads 232
1965 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek
Abstract:
Speech emotion recognition (SER) has received increasing research interest in recent years. It is a common practice to utilize emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach: the emotions are not natural, meaning that machines are learning to recognize fake emotions; the recordings are very limited in quantity and poor in speaking variety; there is some language dependency in SER; consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in the sequence of actions is the speech detection issue. The paper provides a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
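A toy, hedged sketch of frame-level speech detection with a fully connected network: synthetic Gaussian features stand in for MFCC vectors (speech frames with higher energy/variance than silence), and a one-hidden-layer NumPy net is trained by gradient descent. This is an illustration of the architecture class, not the authors' Kazakh/Russian model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for 13-dim MFCC frame features: "speech" vs "silence" frames.
speech = rng.normal(1.0, 1.0, size=(200, 13))
silence = rng.normal(-1.0, 0.3, size=(200, 13))
X = np.vstack([speech, silence])
y = np.array([1.0] * 200 + [0.0] * 200)

# One hidden layer (tanh) + sigmoid output, trained on cross-entropy.
W1 = rng.normal(0, 0.1, (13, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, 8); b2 = 0.0
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(y)                       # d(loss)/d(logit), averaged
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)        # backprop through tanh
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"frame accuracy: {acc:.2f}")
```

Real systems would compute MFCCs from audio and use a deeper network with held-out validation; the separable toy data here just makes the mechanics visible.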
Procedia PDF Downloads 26
1964 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization
Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi
Abstract:
Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, the current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mixture, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm
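One way to realize region-based path planning over only the detected part is a back-and-forth (boustrophedon) raster over its bounding box; a hedged sketch (coordinate frame, units, and pass spacing are illustrative, not the study's actual planner):

```python
def raster_path(bbox, step):
    """Boustrophedon spray path covering only the detected part's bounding
    box rather than the whole tray. bbox = (x0, y0, x1, y1) in tray
    coordinates; step = spacing between passes (set by spray-fan overlap)."""
    x0, y0, x1, y1 = bbox
    path, left_to_right = [], True
    y = y0
    while y <= y1:
        xs = (x0, x1) if left_to_right else (x1, x0)
        path += [(xs[0], y), (xs[1], y)]   # one full pass at height y
        left_to_right = not left_to_right  # alternate direction each pass
        y += step
    return path

print(raster_path((0, 0, 100, 20), 10))
```

The waypoints would then be converted to robot poses (adding standoff distance and gun orientation) before being sent to the UR10 controller.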
Procedia PDF Downloads 82
1963 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition
Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang
Abstract:
Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single bead deposition of Ti-6Al-4V alloy with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction is simulated using the explicit solver of the finite element software, Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled using a moving heat source with a Gaussian distribution, applied along the pre-formed bead’s surface using the VDFLUX Fortran subroutine. The bead’s cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is considered in the model. Besides, the roller is cooled internally using axial water flow, considered in the model using convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using the J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on Johnson-Cook’s theory. Various aspects of this material behavior are captured in the FE software using the subroutines -VUMAT for elastoplastic behavior, VUHARD for yield stress, and VUEXPAN for thermal strain. The roller is assumed to be elastic and does not undergo any plastic deformation. Also, contact friction at the roller-bead interface is considered in the model. 
Based on the thermal results of the bead, the distance between the roller and the deposition nozzle (roller offset) can be determined to ensure rolling occurs around the beta-transus temperature for the Ti-6Al-4V alloy. It is identified that the roller offset and the nominal bead height reduction are crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease by about 52% due to in-situ rolling throughout the deposited bead. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model
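The moving Gaussian heat source described above (as applied through the VDFLUX subroutine) can be illustrated as follows; the laser power, absorptivity, spot radius and travel speed values are placeholders, not the paper's:

```python
import numpy as np

def gaussian_flux(x, y, t, P=1000.0, eta=0.4, r0=2e-3, v=0.01):
    """Surface heat flux (W/m^2) of a Gaussian laser spot moving at speed v
    along x: q = 2*eta*P/(pi*r0^2) * exp(-2*d^2/r0^2), where d is the
    distance from (x, y) to the beam centre at time t. All parameter values
    here are illustrative assumptions."""
    d2 = (x - v * t) ** 2 + y ** 2
    return 2 * eta * P / (np.pi * r0 ** 2) * np.exp(-2 * d2 / r0 ** 2)

# peak flux at the beam centre, decaying radially
print(gaussian_flux(0.0, 0.0, 0.0))  # ~6.37e7 W/m^2 with these placeholders
```

In the simulation this flux is evaluated at each surface integration point and time increment, which is exactly the role a user flux subroutine plays in an FE solver.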
Procedia PDF Downloads 109
1962 The Moment of the Optimal Average Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables
Authors: Edokpa Idemudia Waziri, Salisu S. Umar
Abstract:
Hotelling’s T^2 is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution. Control charts based on T^2 have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T^2 statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is one of the control statistics used to overcome this drawback of Hotelling’s T^2 statistic. In this paper, the probability distribution of the Average Run Length (ARL) of the MEWMA control chart when the quality characteristics exhibit substantial cross-correlation, and when the process is in-control and out-of-control, was derived using the Markov chain algorithm. The derivations of the probability functions and the moments of the run length distribution were also obtained, and they were consistent with some existing results for the in-control and out-of-control situations. Through a simulation process, the procedure identified a class of ARLs for the MEWMA chart when the process is in-control and out-of-control. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL0 or the number of variables (p) increases, the optimum value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
Keywords: average run length, Markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter
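The MEWMA statistic itself can be sketched directly; this toy uses an identity in-control covariance and the asymptotic covariance of the EWMA vector, both simplifying assumptions relative to the equally-correlated case the paper studies:

```python
import numpy as np

def mewma_stats(X, lam=0.1):
    """MEWMA monitoring statistic T_i^2 = Z_i' S_z^{-1} Z_i with
    Z_i = lam*X_i + (1 - lam)*Z_{i-1}, Z_0 = 0, and asymptotic covariance
    S_z = lam/(2 - lam) * Sigma. Sigma = I is assumed here for simplicity."""
    n, p = X.shape
    Sz_inv = np.eye(p) * (2 - lam) / lam   # inverse of lam/(2-lam) * I
    Z = np.zeros(p)
    T2 = []
    for x in X:
        Z = lam * x + (1 - lam) * Z
        T2.append(Z @ Sz_inv @ Z)
    return np.array(T2)

rng = np.random.default_rng(2)
incontrol = rng.standard_normal((200, 3))
shifted = incontrol + np.array([0.5, 0.5, 0.5])   # small sustained shift
print(mewma_stats(incontrol).mean(), mewma_stats(shifted).mean())
```

A small, persistent shift drives the T^2 values well above their in-control level, which is exactly the sensitivity advantage over the plain Hotelling chart that motivates the MEWMA; the chart's control limit would then be set from the ARL analysis the paper derives.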
Procedia PDF Downloads 422
1961 A Programming Assessment Software Artefact Enhanced with the Help of Learners
Authors: Romeo A. Botes, Imelda Smit
Abstract:
The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback. At the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing. At the same time, feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment and feedback of practical programming activities in a senior database programming class. The artefact was developed using three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles regarding the implementation of this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner – allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.
Keywords: programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method
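A toy sketch of the kind of computer-aided grading with per-case scaffolded feedback described above; the interface (a callable submission checked against test cases) is invented for illustration and is not the artefact's actual design:

```python
def grade(submission, tests):
    """Run each test case against the learner's function and return a mark
    plus per-case feedback, so the learner can improve on resubmission."""
    feedback, passed = [], 0
    for i, (args, expected) in enumerate(tests, 1):
        try:
            got = submission(*args)
        except Exception as e:                       # submission crashed
            feedback.append(f"case {i}: raised {type(e).__name__}")
            continue
        if got == expected:
            passed += 1
            feedback.append(f"case {i}: correct")
        else:
            feedback.append(f"case {i}: expected {expected!r}, got {got!r}")
    return passed / len(tests), feedback

mark, notes = grade(lambda a, b: a + b, [((1, 2), 3), ((2, 2), 5)])
print(mark, notes)
```

Allowing multiple submissions then simply means re-running `grade` on each attempt and letting the accumulated feedback guide the next one.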
Procedia PDF Downloads 296
1960 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without uncertainty in instruments, defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided data sets for the measured values of the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem; all these techniques take time to converge to an optimum value, but PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis could be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS
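The inverse-analysis loop can be sketched as a minimal global-best PSO minimizing a least-squares displacement objective; the two-parameter forward model below is a toy stand-in for the Plaxis FE run, and all names, bounds and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model in place of the FE simulation: displacement as a simple
# function of two "soil parameters" (stiffness k, friction angle phi).
def model(params, loads):
    k, phi = params
    return loads / k + 0.01 * phi * np.sqrt(loads)

loads = np.linspace(10, 100, 8)
true_params = np.array([50.0, 30.0])
measured = model(true_params, loads)          # synthetic "measured" data

def objective(p):
    """Least-squares norm between predicted and measured displacements."""
    return np.sum((model(p, loads) - measured) ** 2)

# Minimal global-best PSO with inertia (w=0.7, c1=c2=1.5).
n, dim = 30, 2
lo, hi = np.array([1.0, 0.0]), np.array([200.0, 60.0])   # parameter bounds
pos = lo + rng.random((n, dim)) * (hi - lo)
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest, objective(gbest))
```

In the real workflow each objective evaluation is an expensive FE run, which is why the paper builds polynomial and neural-network response surfaces as cheap surrogates before optimizing.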
Procedia PDF Downloads 146
1959 Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)
Authors: Abdelkader Hachemi, Boualem Remini
Abstract:
Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results demonstrate a good agreement with the geophysical prospection data, confirming the accuracy of the sharp interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.
Keywords: seawater, intrusion, sharp interface, Algeria
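A useful first-order cross-check on any computed sharp-interface position is the classical Ghyben-Herzberg relation; this back-of-the-envelope approximation is ours, not the paper's finite element model:

```python
def interface_depth(h, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg estimate of the sharp freshwater/seawater interface
    depth below sea level: z = rho_f / (rho_s - rho_f) * h (approximately
    40*h for typical densities), where h is the freshwater head above sea
    level. Assumes hydrostatic equilibrium and a sharp interface."""
    return rho_f / (rho_s - rho_f) * h

print(interface_depth(0.5))  # 20.0 m depth for a 0.5 m freshwater head
```

The 40:1 amplification explains why even small head declines from pumping can pull the interface markedly inland and upward, consistent with the pumping scenario above.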
Procedia PDF Downloads 74
1958 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway
Authors: Marius Nygaard, Ona Flindall
Abstract:
The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated-timber (CLT) was utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and the role of the management company who both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts etc.) this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled them to function both as a liaison between contractor and client, and between contractor and producer. A position that allowed them to improve the flow of information. 
This ensured that CLT was handled on equal terms to other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies should be introduced in big and well-established industries.
Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market
Procedia PDF Downloads 223
1957 Fitness Apparel and Body Cathexis of Women Consumers When and after Using Virtual Fitting Room
Authors: Almas Athif Fathin Wiyantoro, Fransiskus Xaverius Ivan Budiman, Fithra Faisal Hastiadi
Abstract:
The growth of clothing and technology as marketing tools has a great influence on online business owners seeking to understand how consumer characteristics and psychology influence the purchasing decisions made by Indonesian women consumers. One of the most important issues faced by Indonesian women consumers is the suitability (fit) of clothing. The suitability of clothing can affect body cathexis, identity, and confidence. A thematic analysis of clothing fit and the body cathexis of women consumers when and after using virtual fitting room technology, in relation to purchase decisions, is therefore important. This research uses a pre-post treatment group method and a snowball sampling recruitment technique, which draws on interpersonal relations and connections between people to include individuals from specific populations, yielding 39 participants. The results show that body scans and photos produced by the virtual fitting room can serve as an intervention that helps women consumers assess their body cathexis objectively in the process of making purchase decisions. The study also obtained the regression equation Y = 0.830 + 0.290X1 + 0.292X2, showing a positive relationship between the suitability of clothing and body cathexis, which influenced the purchasing decisions of women consumers when and after (through personal and psychological factors) using the virtual fitting room; that is, all independent variables have a positive influence on the purchase decision of women consumers.
Keywords: body cathexis, clothing fitness, purchasing decision making, virtual fitting room
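The reported fit can be applied directly; the variable labels below (X1 = clothing suitability score, X2 = body-cathexis score) are our reading of the abstract:

```python
def purchase_decision(x1, x2):
    """Fitted regression reported in the abstract:
    Y = 0.830 + 0.290*X1 + 0.292*X2.
    x1: clothing suitability score, x2: body-cathexis score (our reading
    of the variable labels; the scales are not stated in the abstract)."""
    return 0.830 + 0.290 * x1 + 0.292 * x2

# both predictors raise the predicted purchase-decision score
print(purchase_decision(3.0, 3.0))  # 2.576
```

The nearly equal coefficients (0.290 vs 0.292) suggest the two predictors contribute about equally to the predicted purchase decision.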
Procedia PDF Downloads 2131956 Towards a Methodology for the Assessment of Neighbourhood Design for Happiness
Authors: Tina Pujara
Abstract:
Urban and regional research in the newly emerging inter-disciplinary field of happiness is seemingly limited. However, it is progressively being recognized that there is enormous potential for social and behavioral scientists to add a spatial dimension to it. In fact, the happiness of communities can be notably influenced by the design and maintenance of the neighborhoods they inhabit, the probable key reason being that places can facilitate human social connections and relationships. While it is increasingly acknowledged that some neighborhood designs appear better suited for social connectedness than others, the plausible reasons for places to deter these characteristics, and perhaps their influence on happiness, are largely unknown. In addition, no explicit step-wise methodology to assess neighborhood designs for the happiness of their communities is known to exist. This paper is an attempt towards developing such a methodological framework. The paper presents the development of a methodological framework for assessing neighborhood designs for happiness, with a particular focus on the outdoor shared spaces in neighborhoods. The developed methodological framework of investigation follows a mixed method approach and draws upon four different sources of information. The framework proposes an empirical examination of the contribution of neighborhood factors, particularly outdoor shared spaces, to individual happiness. One of the main tools proposed for this empirical examination is Jan Gehl’s Public Space Public Life (PSPL) Survey. The developed framework, as presented in the paper, is a contribution towards the development of a consolidated methodology for assessing neighborhood designs for happiness, which can further serve as a unique tool to inform urban designers, architects and other decision makers.Keywords: happiness, methodology, neighbourhood design, outdoor shared spaces
Procedia PDF Downloads 1631955 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curves obtained with different measurement methods and densitometers: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve was generated for each method, with accuracy, precision and sensitivity analysis. netOD is obtained by measuring the change in optical density (OD) of the film before and after irradiation: with a film scanner, ImageJ is used to extract the pixel value of the film on the red channel of the three channels (RGB); with a point densitometer, the change in OD before and after irradiation is calculated; and with a spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that all three calibration methods gave netOD readings with a dose precision below 3% for an uncertainty of 1σ (one sigma). The sensitivity of the three methods showed the same trend in film response to radiation but differed in magnitude. The accuracy of the three methods was below 3% for doses above 100 cGy and 200 cGy, but for doses below 100 cGy it exceeded 3% when using the point densitometer and the spectrophotometer.
When all three methods were used for clinical implementation, the results showed accuracy and precision below 2% for the scanner and spectrophotometer and above 3% for the point densitometer.Keywords: calibration methods, film dosimetry EBT3, flatbed scanner, densitometer, spectrophotometer
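The netOD quantity used above can be sketched for the flatbed-scanner workflow. This sketch assumes the common convention netOD = log10(PV_before / PV_after) on background-corrected red-channel pixel values; the exact protocol in the study may differ:

```python
import math

def net_optical_density(pv_before: float, pv_after: float,
                        pv_background: float = 0.0) -> float:
    """netOD from red-channel pixel values scanned before and after
    irradiation (optionally background-corrected). This is the common
    flatbed-scanner convention, assumed here rather than taken from
    the paper's protocol."""
    return math.log10((pv_before - pv_background) / (pv_after - pv_background))

# Irradiation darkens the film, lowering the pixel value, so netOD > 0.
# The 16-bit pixel values below are hypothetical.
netod = net_optical_density(pv_before=40000, pv_after=25000)
```

A calibration curve is then fitted between netOD and delivered dose, which is what the three methods in the abstract are compared on.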
Procedia PDF Downloads 1351954 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance
Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta
Abstract:
Glass is a commonly used material in building; there is not a unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure, for classification from the point of view of human safety, of flat plates of different thicknesses, using a pendulum of two tires and 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, so the real stress distribution is not determined by this test. The influence of different boundary conditions, such as those employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no account taken of load control or stiffness, and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are now open: a) to define a standard for the ‘in situ’ test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution.
To work on both research lines, a laboratory that allows testing medium-size specimens with different boundary conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as a reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is used to provide sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.Keywords: glass plates, human impact test, modal test, plate boundary conditions
Procedia PDF Downloads 3071953 Modeling of Large Elasto-Plastic Deformations by the Coupled FE-EFGM
Authors: Azher Jameel, Ghulam Ashraf Harmain
Abstract:
In recent years, enriched techniques like the extended finite element method (XFEM), the element free Galerkin method (EFGM), and the coupled finite element-element free Galerkin method have found wide application in modeling different types of discontinuities produced by cracks, contact surfaces, and bi-material interfaces. The extended finite element method faces severe mesh distortion issues while modeling large deformation problems. The element free Galerkin method does not have mesh distortion issues, but it is computationally more demanding than the finite element method. The coupled FE-EFGM proves to be an efficient numerical tool for modeling large deformation problems as it exploits the advantages of both FEM and EFGM. The present paper employs the coupled FE-EFGM to model large elastoplastic deformations in bi-material engineering components. The large deformation occurring in the domain has been modeled using the total Lagrangian approach. The non-linear elastoplastic behavior of the material has been represented by the Ramberg-Osgood model. Elastic predictor-plastic corrector algorithms are used for the evaluation of stresses during large deformation. Finally, several numerical problems are solved by the coupled FE-EFGM to illustrate its applicability, efficiency and accuracy in modeling large elastoplastic deformations in bi-material samples. The results obtained by the proposed technique are compared with the results obtained by XFEM and EFGM, and a remarkable agreement was observed between the results obtained by the three techniques.Keywords: XFEM, EFGM, coupled FE-EFGM, level sets, large deformation
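For context, the Ramberg-Osgood relation used above links total strain to stress through an elastic term plus a power-law plastic term. A sketch of one common form, with illustrative (assumed) material parameters rather than values from the paper:

```python
def ramberg_osgood_strain(stress: float, E: float, sigma0: float,
                          alpha: float, n: float) -> float:
    """Total strain for the Ramberg-Osgood relation
    eps = s/E + alpha*(s/E)*(s/sigma0)**(n-1).
    This is one common parameterization; the paper's exact form and
    constants are not given in the abstract."""
    elastic = stress / E
    plastic = alpha * (stress / E) * (stress / sigma0) ** (n - 1)
    return elastic + plastic

# Illustrative steel-like parameters (assumed): E = 200 GPa, sigma0 = 250 MPa.
eps = ramberg_osgood_strain(stress=250e6, E=200e9, sigma0=250e6, alpha=3/7, n=5)
```

At stress = sigma0 the plastic term equals alpha times the elastic strain, which is why sigma0 acts as the transition stress of the model.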
Procedia PDF Downloads 4471952 Combined Treatment of PARP-1 Inhibitor and Carbon Ion or Gamma Exposure Reduces the Metastatic Potential in Cultured Human Cells
Authors: Priyanka Chowdhury, Asitikantha Sarma, Utpal Ghosh
Abstract:
Hadron therapy using high Linear Energy Transfer (LET) ion beams is producing promising clinical results worldwide. Its major advantages are its ability to kill radio-resistant tumors and its anti-metastatic activity. Poly(ADP-ribose) polymerase-1 (PARP-1) inhibitors have been widely used as radiosensitizers, but their role in metastasis is unknown. The purpose of our study was to investigate the effect of PARP-1 depletion in combination with either Carbon Ion Beam (CIB) or gamma irradiation on the metastatic potential of cultured cancerous cells. A549 cells were irradiated with CIB (0-4 Gy) or gamma rays (0, 2, 4, 6 and 10 Gy) with and without PARP-1 inhibition. The metastatic potential of the cells was determined by cell migratory assay, expression and activity of MMP-2 and MMP-9, and expression of Cadherin, Fibronectin, and Vimentin. CIB exposure significantly reduced migratory property and the activity of MMP-2 and MMP-9. CIB with PARP-1 inhibition reduced cell migration and Matrix Metalloproteinase (MMP) activity in a synergistic manner. Expression of MMPs was also down-regulated by CIB and the combined treatment. On the contrary, MMP-2 and MMP-9 activity was significantly increased in gamma irradiated cells but decreased upon combined treatment with gamma irradiation and a PARP-1 inhibitor. MMP expression and migration were reduced when gamma irradiation was combined with PARP-1 inhibition. Thus, our study clearly demonstrates that PARP-1 inhibition in combination with either high or low LET radiation can significantly suppress metastatic potential in cancer cells and can thereby be a promising tool in controlling metastatic cancers.Keywords: high LET, low LET, matrix metalloproteinase (MMP), PARP-1
Procedia PDF Downloads 2141951 The Integration of ICT in EFL Classroom and Its Impact on Teacher Development
Authors: Tayaa Karima, Bouaziz Amina
Abstract:
Today's world is knowledge-based; everything we do is somehow connected with technology, which has a remarkable influence on socio-cultural and economic developments, including educational settings. This type of technology is used in many teaching/learning settings where the medium of instruction is computer technology, particularly digital technologies. There has been much debate over the use of computers and the internet in foreign language teaching for more than two decades. Various studies highlight that the integration of Information and Communications Technology (ICT) in foreign language teaching will have positive effects on both teachers and students, helping them be aware of the modernized world and meet the current demands of the globalised world. Information and communication technology has been gradually integrated into foreign language learning environments as a platform for providing learners with learning opportunities. Thus, the impact of ICT on language teaching and learning has been acknowledged globally because of the fundamental role it plays in enhancing teaching and learning quality, modifying pedagogical practice, and motivating learners. Due to ICT-related developments, many Maghreb countries regard ICT as a tool for change and innovation in education. Therefore, the ministry of education attempted to set up computer laboratories and provide internet connections in schools. Continued investment in ICT for educational innovation and improvement requires teachers who will employ it in the classroom as a vital part of the curriculum. ICT does not have an educational value in itself; it becomes precious when teachers use it in the learning and teaching process.
This paper examines the impact of ICT on teacher development rather than on teaching quality and highlights some challenges facing the use of ICT in language learning/teaching.Keywords: information communications technology (ICT), integration, foreign language teaching, teacher development, learning opportunity
Procedia PDF Downloads 3881950 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact raises the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
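One simple way to realize a vector-to-matrix transformation like the one described above is zero-padding followed by a reshape. The actual mapping used for PET coincidence events is not specified in the abstract, so this is only an assumed illustration of the general idea:

```python
import numpy as np

def vector_to_matrix(features: np.ndarray, size: int) -> np.ndarray:
    """Zero-pad a short feature vector and reshape it into a size x size
    matrix suitable as CNN input. A stand-in for the paper's
    transformation, which is not detailed in the abstract."""
    padded = np.zeros(size * size, dtype=features.dtype)
    padded[: features.size] = features   # row-major fill, rest stays zero
    return padded.reshape(size, size)

# A 7-feature coincidence-event vector (hypothetical) mapped to 3x3.
img = vector_to_matrix(np.arange(7, dtype=float), size=3)
```

The resulting matrices can then be batched and fed to a standard 2-D convolutional network exactly as images would be.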
Procedia PDF Downloads 1431949 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data
Authors: M. Yilmaz, I. Yilmaz, M. Uysal
Abstract:
The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behavior of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth onto the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the (Free-air and Bouguer) gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the Free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.Keywords: free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity
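The SD and RMSE agreement measures used above can be sketched directly. The anomaly values below are made-up numbers in mGal, not the study's data:

```python
import math

def residual_stats(model_vals, terrestrial_vals):
    """SD and RMSE of (model - terrestrial) gravity-anomaly residuals,
    the two agreement measures quoted in the abstract (units: mGal).
    SD uses the sample (n-1) denominator; RMSE uses n."""
    res = [m - t for m, t in zip(model_vals, terrestrial_vals)]
    n = len(res)
    mean = sum(res) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in res) / (n - 1))
    rmse = math.sqrt(sum(r ** 2 for r in res) / n)
    return sd, rmse

# Hypothetical model-based vs terrestrial anomalies (mGal).
sd, rmse = residual_stats([10.0, 12.0, 9.0], [9.0, 10.0, 10.0])
```

SD measures scatter about the mean residual while RMSE also penalizes a systematic offset, which is why both are reported when ranking the global models.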
Procedia PDF Downloads 1691948 Determination of Fatigue Limit in Post Impacted Carbon Fiber Reinforced Epoxy Polymer (CFRP) Specimens Using Self Heating Methodology
Authors: Deepika Sudevan, Patrick Rozycki, Laurent Gornet
Abstract:
This paper presents the experimental identification of the fatigue limit for pristine and impacted Carbon Fiber Reinforced Epoxy Polymer (CFRP) woven composites based on the relatively new self-heating methodology for composites. CFRP composites of [0/90]8 and quasi-isotropic configurations, prepared using the hand-layup technique, were subjected to low-energy impacts (20 J energy) simulating barely visible impact damage (BVID). A runway debris strike, tool drop or hailstone impact can cause BVID on an aircraft fuselage made of carbon composites, and hence understanding the post-impact fatigue response of CFRP laminates is of immense importance to the aerospace community. The BVID zone on the specimens is characterized using the X-ray tomography technique. Both pristine and impacted specimens are subjected to several blocks of constant amplitude (CA) fatigue loading, keeping the R-ratio constant but incrementing the mean loading stress after each block. The number of loading cycles in each block is a subjective parameter, and it varies for pristine and impacted CFRP specimens. To monitor the temperature evolution during fatigue loading, thermocouples are pasted on the CFRP specimens at specific locations. The fatigue limit is determined by two strategies: the first considers the stabilized temperature in every block, and the second considers the change in the temperature slope per block. The results show that both strategies can be adopted to determine the fatigue limit in both pristine and impacted CFRP composites.Keywords: CFRP, fatigue limit, low energy impact, self-heating, WRM
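In self-heating analyses, the stabilized-temperature strategy is often evaluated by fitting two straight lines to the stabilized temperature rise versus stress amplitude and taking their intersection as the fatigue limit. The sketch below uses invented block data and an assumed split point; it illustrates the general bilinear-fit idea, not the paper's exact procedure:

```python
import numpy as np

def fatigue_limit(stress, temp_rise, split):
    """Self-heating estimate: fit one line to the blocks below `split`
    (weak dissipation) and one to the blocks above (strong dissipation),
    and return the stress at which the two fits intersect. The split
    index and all data here are illustrative assumptions."""
    a1, b1 = np.polyfit(stress[:split], temp_rise[:split], 1)
    a2, b2 = np.polyfit(stress[split:], temp_rise[split:], 1)
    return (b2 - b1) / (a1 - a2)

stress = np.array([50, 100, 150, 200, 250, 300], dtype=float)   # MPa (assumed)
temp   = np.array([0.1, 0.2, 0.3, 1.5, 3.0, 4.5], dtype=float)  # K (assumed)
limit = fatigue_limit(stress, temp, split=3)
```

The second strategy in the abstract, tracking the change in temperature slope per block, flags the same transition point directly from the block-wise slopes.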
Procedia PDF Downloads 2321947 Factors Affecting Nutritional Status of Elderly People of Rural Nepal: A Community-Based Cross-Sectional Study
Authors: Man Kumar Tamang, Uday Narayan Yadav
Abstract:
Background and objectives: Every country in the world is facing a demographic challenge due to the rapid growth of the population over 60 years of age. Adequate diet and nutritional status are important determinants of health in elderly populations. This study aimed to assess the nutritional status of the elderly population and the factors associated with malnutrition in a community setting in rural Nepal. Methods: This is a community-based cross-sectional study among the elderly aged 60 years or above in three randomly selected VDCs of Morang district in eastern Nepal, between August and November 2016. A multi-stage cluster sampling design was adopted with a sample size of 345, of whom 339 participated in the study. Nutritional status was assessed by the MNA tool, and associated socio-economic, demographic, psychological and nutritional factors were examined by binary logistic regression analysis. Results: Among the 339 participants, 24.8% were found to have normal nutritional status, 49.6% were at risk of malnutrition, and 24.8% were malnourished. Independent factors associated with malnutrition among the elderly, after controlling for the confounders in the bivariate analysis, were: belonging to a backward caste according to the traditional Hindu caste system [OR=2.69, 95% CI: 1.17-6.21], being unemployed (OR=3.23, 95% CI: 1.63-6.41), having experienced any mistreatment from caregivers (OR=4.05, 95% CI: 1.90-8.60), not being involved in physical activity (OR=4.67, 95% CI: 1.87-11.66), and taking medication for any co-morbidities. Conclusion: Many socio-economic, psychological and physiological factors affect nutritional status in our sample population, and these issues need to be addressed to improve elderly nutrition and health status.Keywords: elderly, eastern Nepal, malnutrition, nutritional status
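The odds ratios quoted above come with 95% confidence intervals; the standard Woolf (log) method for a 2x2 table can be sketched as follows, with hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed & malnourished, b = exposed & not malnourished,
    c = unexposed & malnourished, d = unexposed & not malnourished.
    The counts used below are hypothetical, not the study's data."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 40, 20, 80)
```

An interval whose lower bound stays above 1, as in all the factors listed in the abstract, is what marks the association as statistically significant at the 5% level.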
Procedia PDF Downloads 2981946 The Study of Customer Satisfaction towards the Services of Baan Bueng Resort in Nongprue Subdistrict, Baanlamung District, Chonburi Province
Authors: Witthaya Mekhum, Jinjutha Srihera
Abstract:
This research aims to study customer satisfaction towards the services of Baan Bueng Resort in Nongprue Subdistrict, Baanlamung District, Chonburi Province. A sample of 108 was drawn by random sampling from Thai and foreign tourists at Baan Bueng Resort, and questionnaires were distributed. Data were analyzed using frequency, percentage, mean (X) and standard deviation (S.D.). The tool used in this research was a questionnaire on satisfaction towards the services of Baan Bueng Resort in Nongprue Subdistrict, Baanlamung District, Chonburi Province. The questionnaire can be divided into 3 parts: Part 1: general information, i.e. gender, age, educational level, occupation, income, and nationality; Part 2: customer satisfaction towards the services of Baan Bueng Resort; and Part 3: suggestions of respondents. It can be concluded that most of the respondents are male, aged between 25 and 35 years old, with a bachelor's degree. Most of them are private company employees with an income of 10,000–20,000 Baht per month. The majority of customers are satisfied with the services at Baan Bueng Resort; overall satisfaction is at a good level. Considering each item, the items with the highest satisfaction level are the personality and manner of employees and the promptness and accuracy of cashier staff. Overall satisfaction towards the cleanliness of the rooms is at a very good level; considering each item, the item with the highest satisfaction level is that the guest room is cleaned every day. Satisfaction towards the quality of food and beverages at Baan Bueng Resort in Nongprue Subdistrict, Baanlamung District, Chonburi Province is also at a very good level. The item with the highest satisfaction is hotel facilities.Keywords: satisfaction study, service, hotel, customer
Procedia PDF Downloads 3311945 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
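As a baseline for the backtesting mentioned above, the Historical Simulation method that CGAN is compared against computes VaR from the empirical loss distribution and ES as the mean loss beyond it. A minimal sketch with hypothetical daily returns (one common quantile convention; exact conventions vary):

```python
def var_es_historical(returns, alpha=0.99):
    """Historical-simulation VaR and ES at confidence level alpha:
    VaR is the alpha-quantile of the empirical loss distribution
    (losses = negated returns), ES the mean loss at or beyond it.
    This is a textbook sketch, not the paper's implementation."""
    losses = sorted(-r for r in returns)
    k = max(int(alpha * len(losses)), 1)  # index of the alpha-quantile loss
    var = losses[k - 1]
    tail = losses[k - 1:]
    es = sum(tail) / len(tail)
    return var, es

# Ten hypothetical daily returns; alpha = 0.9 so the quantile is resolvable.
returns = [0.01, -0.02, 0.005, -0.03, 0.015, -0.01, 0.02, -0.005, 0.0, -0.015]
var, es = var_es_historical(returns, alpha=0.9)
```

A CGAN-based alternative replaces the fixed historical sample with scenarios drawn from the learned conditional distribution before applying the same quantile calculation.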
Procedia PDF Downloads 1431944 Scentscape of the Soul as a Direct Channel of Communication with the Psyche and Physical Body
Authors: Elena Roadhouse
Abstract:
“When I take the kitchen middens from the latest canning session out to the compost before going to bed, the orchestra is in full chorus. Night vapors and scents from the earth mingle with the fragrance of honeysuckle nearby and basil grown in the compost. They merge into the rhythmic pulse of night” (William Longgood). Carl Jung did not specifically recognize scent and olfactory function as a window into the psyche. He did recognize instinct and the natural history of mankind as keys to understanding and reconnecting with the psyche. The progressive path of modern humans has brought incredible scientific and industrial advancements that have changed the human relationship with Mother Earth and the primal wisdom of mankind, and led to the loss of instinct. The olfactory bulbs are an integral part of our ancient brain and have evolved in a way that is proportional to the human separation from the instinctual self. If olfaction is a gateway to our instinct, then it is also a portal to the soul. Natural aromatics are significant and powerful instruments for supporting the mind, our emotional selves, and our bodies. This paper aims to shed light on the important role of scent in understanding the psyche, generational trauma, and archetypal fragrance. Personalized natural perfume combined with mindfulness practices can be used as an effective behavioral conditioning tool to promote the healing of transgenerational and individual trauma, the fragmented self, and the physical body.Keywords: scentscape of the soul, psyche, individuation, epigenetics, depth psychology, Carl Jung, instinct, trauma, archetypal scent, personal myth, holistic wellness, natural perfumery
Procedia PDF Downloads 1041943 Restored CO₂ from Flue Gas and Utilization by Converting to Methanol by 3 Step Processes: Steam Reforming, Reverse Water Gas Shift and Hydrogenation
Authors: Rujira Jitrwung, Kuntima Krekkeitsakul, Weerawat Patthaveekongka, Chiraphat Kumpidet, Jarukit Tepkeaw, Krissana Jaikengdee, Anantachai Wannajampa
Abstract:
Flue gas discharged from coal-fired or gas-combustion power plants contains around 12% carbon dioxide (CO₂), 6% oxygen (O₂), and 82% nitrogen (N₂). CO₂ is a greenhouse gas implicated in global warming, and Carbon Capture, Utilization, and Storage (CCUS) is a tool for dealing with this CO₂. Flue gas is drawn from the chimney and filtered, then compressed to build up the pressure to 8 bar. This compressed flue gas is sent to a three-stage Pressure Swing Adsorption (PSA) unit filled with activated carbon. Experiments showed an optimum adsorption pressure of 7 bar, at which CO₂ is adsorbed step by step in the 1st, 2nd, and 3rd stages, giving CO₂ concentrations of 29.8, 66.4, and 96.7%, respectively. The mixed gas from the last step is composed of 96.7% CO₂, 2.7% N₂, and 0.6% O₂. This mixed CO₂ product obtained from the three-stage PSA contains high-concentration CO₂, ready for methanol synthesis. The mixed CO₂ was tested in a 5 Liter/day methanol synthesis reactor skid by a three-step process: steam reforming, then reverse water gas shift, and then hydrogenation. The results showed that mixed CO₂/CH₄ ratios of 70/30, 50/50, 30/70, and 10/90 % (v/v) yielded 2.4, 4.3, 5.6, and 6.0 Liter/day of methanol and saved 40, 30, 20, and 5% of CO₂, respectively. The optimum condition for both methanol yield and CO₂ consumption used a CO₂/CH₄ ratio of 43/57 % (v/v), which yielded 4.8 Liter/day of methanol and saved 27% of CO₂ compared with traditional methanol production from methane steam reforming (5 Liter/day), which consumes no CO₂.Keywords: carbon capture utilization and storage, pressure swing adsorption, reforming, reverse water gas shift, methanol
Procedia PDF Downloads 1871942 A Study of Basic and Reactive Dyes Removal from Synthetic and Industrial Wastewater by Electrocoagulation Process
Authors: Almaz Negash, Dessie Tibebe, Marye Mulugeta, Yezbie Kassa
Abstract:
Large-scale textile industries use large amounts of toxic chemicals, which are hazardous to human health and environmental sustainability. In this study, the removal of various dyes from the effluents of textile industries using the electrocoagulation process was investigated. The studied dyes were Reactive Red 120 (RR-120), Basic Blue 3 (BB-3), and Basic Red 46 (BR-46), which were found in samples collected from the effluents of three major textile factories in the Amhara region, Ethiopia. For maximum removal, the dye BB-3 required acidic conditions (pH 3), RR-120 basic conditions (pH 11), and BR-46 neutral conditions (pH 7). BB-3 required a longer treatment time of 80 min than BR-46 and RR-120, which required 30 and 40 min, respectively. The best removal efficiencies of 99.5%, 93.5%, and 96.3% were achieved for BR-46, BB-3, and RR-120, respectively, from synthetic wastewater containing 10 mg L⁻¹ of each dye at an applied potential of 10 V. The method was applied to real textile wastewaters, and 73.0 to 99.5% removal of the dyes was achieved, indicating that electrocoagulation can be used as a simple and reliable method for the treatment of real wastewater from textile industries; it is a potentially viable and inexpensive tool for the treatment of textile dyes. Analysis of the electrochemically generated sludge by X-ray Diffraction, Scanning Electron Microscopy, and Fourier Transform Infrared Spectroscopy revealed the expected crystalline aluminum oxides (bayerite (Al(OH)₃) and diaspore (AlO(OH))) in the sludge. An amorphous phase was also found in the floc. Textile industry owners should be aware of the impact of discharging effluents on the ecosystem and should use the investigated electrocoagulation method for effluent treatment before discharge into the environment.Keywords: electrocoagulation, aluminum electrodes, Basic Blue 3, Basic Red 46, Reactive Red 120, textile industry, wastewater
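The removal efficiencies reported above follow the standard definition, (C0 - Cf)/C0 x 100. A one-line sketch (the final concentration in the example is hypothetical, chosen to reproduce the 99.5% figure from a 10 mg/L feed):

```python
def removal_efficiency(c_initial: float, c_final: float) -> float:
    """Percent dye removal: (C0 - Cf) / C0 * 100, the metric behind the
    99.5 / 93.5 / 96.3 % figures in the abstract."""
    return (c_initial - c_final) / c_initial * 100.0

# 10 mg/L initial dye, 0.05 mg/L after electrocoagulation (assumed values).
eff = removal_efficiency(10.0, 0.05)
```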
Procedia PDF Downloads 53