Search results for: structural element
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6715

1255 Evaluation of Arsenic Removal in Soils Contaminated by the Phytoremediation Technique

Authors: V. Ibujes, A. Guevara, P. Barreto

Abstract:

The concentration of arsenic in soils represents a serious threat to human health: arsenic is a bioaccumulative toxic element that is transferred through the food chain. In Ecuador, values of 0.0423 mg/kg As are registered in potatoes grown on the slopes of the Tungurahua volcano. The increase of arsenic contamination in Ecuador is mainly due to mining activity, since the process of gold extraction generates toxic tailings with mercury. In the province of Azuay, due to mining activity, the soil reaches concentrations of 2,500 to 6,420 mg/kg As, whereas in the province of Tungurahua arsenic concentrations of 6.9 to 198.7 mg/kg are found due to volcanic eruptions. Given this contamination, the present investigation addresses the remediation of soils in the provinces of Azuay and Tungurahua by the phytoremediation technique and defines an extraction methodology based on analysis of arsenic in the soil-plant system. The methodology consists of selecting the two plant species with the best arsenic removal capacity in 60 μM As synthetic solutions, the lowest mortality percentage, and the best resistance to hydroponic conditions. The arsenic concentration taken up by each plant was obtained by taking 10 ml aliquots and analyzing them by ICP-OES (inductively coupled plasma optical emission spectrometry). Soils were contaminated with synthetic arsenic solutions by the capillarity method to reach arsenic concentrations of 13 and 15 mg/kg. Subsequently, the two plant species were evaluated for their ability to reduce the concentration of arsenic in the soils over 7 weeks. The global variance between soil types was obtained with the InfoStat program. To measure the changes in arsenic concentration in the soil-plant system, the Rhizo and Wenzel arsenic extraction methodologies were used, with subsequent analysis by ICP-OES (Optima 8000, PerkinElmer).
As a result, the selected plants were bluegrass and llanten, owing to their high arsenic removal percentages of 55% and 67% and low mortality rates of 9% and 8%, respectively. In conclusion, Azuay soil with an initial concentration of 13 mg/kg As reached concentrations of 11.49 and 11.04 mg/kg As for bluegrass and llanten, respectively, and from an initial concentration of 15 mg/kg As it reached 11.79 and 11.10 mg/kg As for bluegrass and llanten after 7 weeks. Tungurahua soil with an initial concentration of 13 mg/kg As reached concentrations of 11.56 and 12.16 mg/kg As for bluegrass and llanten, respectively, and from an initial concentration of 15 mg/kg As it reached 11.97 and 12.27 mg/kg As for bluegrass and llanten after 7 weeks. The best arsenic extraction methodology for the soil-plant system is Wenzel's.
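As a rough sketch (not part of the study), the removal percentages implied by the soil concentrations quoted above follow directly from the initial and final values:

```python
# Sketch of the removal calculation behind the figures in the abstract.
def removal_percent(c_initial, c_final):
    """Percentage of arsenic removed from the soil over the trial."""
    return (c_initial - c_final) / c_initial * 100

# Azuay soil, 13 mg/kg As initial concentration, after 7 weeks:
for plant, c_final in [("bluegrass", 11.49), ("llanten", 11.04)]:
    print(f"{plant}: {removal_percent(13, c_final):.1f}% As removed from soil")
```

Note that these soil-phase percentages are lower than the 55% and 67% removal observed in the synthetic hydroponic solutions, as expected for arsenic bound in a soil matrix.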

Keywords: blue grass, llanten, phytoremediation, soil of Azuay, soil of Tungurahua, synthetic arsenic solution

Procedia PDF Downloads 84
1254 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters

Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu

Abstract:

Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been a good indicator of potential enteric pathogens in waters; thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a novel, newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. By simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed (the aptamer is the biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens) and further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance monitoring for E. coli detection. The detection procedure is based on a competitive mode: a known concentration of aptamer is mixed with E. coli cells and, after a certain time, the mixture is filtered and the remaining free aptamers are injected into the system. Through hybridization of the free aptamers with their SWCNT surface-immobilized probe DNA (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, protecting the electrode surface from aggregation of target bacteria or other pollutants that may come from real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98.
The system was regenerated successfully with 5% SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and 3 to 18% CV values (n=3).
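The linear range and R² value reported above come from a calibration fit of sensor response against the logarithm of cell concentration. A minimal sketch of such a fit, using entirely made-up response values (the study's raw calibration data are not given here):

```python
import math

# Hypothetical calibration data: E. coli concentration (cfu/ml) vs. relative
# resistance change of the sensor. Response is modelled as linear in
# log10(concentration), the usual form for such calibration curves.
conc = [2, 10, 100, 1e3, 1e4, 1e5]               # cfu/ml (illustrative)
response = [0.05, 0.12, 0.21, 0.33, 0.42, 0.53]  # fractional change (illustrative)

x = [math.log10(c) for c in conc]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(response) / n

# Ordinary least squares by hand (no external libraries needed)
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, response))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Coefficient of determination R^2 for the fit
ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, response))
ss_tot = sum((yi - mean_y) ** 2 for yi in response)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.3f} per decade, R² = {r_squared:.3f}")
```

An R² above 0.98, as in the abstract, indicates that the log-linear model explains nearly all of the calibration variance over the five-decade range.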

Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs

Procedia PDF Downloads 170
1253 Electrospun Conducting Polymer/Graphene Composite Nanofibers for Gas Sensing Applications

Authors: Aliaa M. S. Salem, Soliman I. El-Hout, Amira Gaber, Hassan Nageh

Abstract:

Nowadays, the development of poisonous gas detectors is an urgent matter for securing human health and the environment, given that even a minimal amount of a poisonous gas can be fatal. To address these concerns, various inorganic and organic sensing materials have been used. Among these are conducting polymers, which have been used as the active material in gas sensors due to their low cost, easily controllable molding, and good electrochemical properties, including a facile fabrication process, inherent physical properties, biocompatibility, and optical properties. Moreover, conducting-polymer-based chemical sensors have notable advantages over conventional ones, such as structural diversity, facile functionalization, room-temperature operation, and easy fabrication. However, the low selectivity and conductivity of conducting polymers have motivated doping them with various materials, especially graphene, to enhance the gas-sensing performance under ambient conditions. A number of approaches have been proposed for producing polymer/graphene nanocomposites, including template-free self-assembly, hard physical template-guided synthesis, chemical and electrochemical synthesis, and electrospinning. In this work, we aim to prepare a novel gas sensor based on electrospun nanofibers of a conducting polymer/RGO composite for the effective and efficient detection of poisonous gases such as ammonia, in different application areas such as environmental gas analysis and the chemical, automotive and medical industries. Moreover, our ultimate objective is to maximize the sensing performance of the prepared sensor and to verify its recovery properties.

Keywords: electrospinning process, conducting polymer, polyaniline, polypyrrole, polythiophene, graphene oxide, reduced graphene oxide, functionalized reduced graphene oxide, spin coating technique, gas sensors

Procedia PDF Downloads 161
1252 Total Life Cycle Cost and Life Cycle Assessment of Mass Timber Buildings in the US

Authors: Hongmei Gu, Shaobo Liang, Richard Bergman

Abstract:

With the current worldwide trend in designing net-zero emission buildings to mitigate climate change, widespread use of mass timber products, such as Cross Laminated Timber (CLT), Nail Laminated Timber (NLT) or Dowel Laminated Timber (DLT), in buildings has been proposed as one approach to reducing Greenhouse Gas (GHG) emissions. Consequently, mass timber building designs are increasingly being adopted by architects in North America, especially for mid- to high-rise buildings where concrete and steel buildings are currently prevalent but traditional light-frame wood buildings are not. Wood buildings and their associated wood products have tended to have lower environmental impacts than competing energy-intensive materials. It is common practice to conduct life cycle assessments (LCAs) and life cycle cost analyses for buildings with traditional structural materials like concrete and steel during the building design process. Mass timber buildings, with their lower environmental impacts, especially GHG emissions, can contribute to the net-zero emission goal for the world building sector. However, the economic impacts of CLT mass timber buildings still vary from the life-cycle cost perspective, as do the environmental trade-offs associated with GHG emissions. This paper quantified the total life cycle cost and cradle-to-grave GHG emissions of a pre-designed CLT mass timber building and compared them to those of a functionally equivalent concrete building. The total life cycle eco-cost-efficiency is defined in this study and calculated to discuss the trade-offs for net-zero emission buildings in a holistic view of both environmental and economic impacts. Mass timber used in buildings in the United States is targeted to come from the nation's sustainably managed forests in order to benefit both national and global environments and economies.

Keywords: GHG, economic impact, eco-cost-efficiency, total life-cycle costs

Procedia PDF Downloads 114
1251 Seismicity and Ground Response Analysis for MP Tourism Office in Indore, India

Authors: Deepshikha Shukla, C. H. Solanki, Mayank Desai

Abstract:

In the last few years, earthquakes have posed a growing challenge to scientists across the world. With a large number of earthquakes occurring in day-to-day life, the threat to life and property has increased manifold, which calls for the urgent attention of researchers globally to carry out research in the field of earthquake engineering. Any hazard related to earthquakes and seismicity is considered a seismic hazard. Common forms of seismic hazard are ground shaking, structural damage, liquefaction, landslides, and tsunamis, to name a few. Among all natural hazards, the earthquake is the most devastating and damaging, as the other hazards are triggered only after the occurrence of an earthquake. In order to quantify and estimate seismicity and seismic hazards, many methods and approaches have been proposed in the past few years: mathematical, conventional, and computational. Convex set theory and the empirical Green's function are mathematical approaches, whereas the deterministic and probabilistic approaches are the conventional approaches for estimating seismic hazards. The ground response and ground shaking of a particular area or region play an important role in the damage caused by an earthquake. In this paper, a seismic study using the deterministic approach and 1-D ground response analysis has been carried out for the Madhya Pradesh Tourism Office in the Indore region of Madhya Pradesh in central India. Indore lies in seismic zone III (IS: 1893, 2002) of the seismic zoning map of India. There are various faults and lineaments in this area, and the Narmada-Son fault and the Gavilgarh fault are the active earthquake sources in the study area. Deepsoil v6.1.7 has been used to perform the 1-D linear ground response analysis for the study area. The Peak Ground Acceleration (PGA) of the city ranges from 0.1g to 0.56g.
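In a deterministic seismic hazard analysis, PGA at a site is typically estimated from a ground-motion attenuation relationship evaluated for the controlling earthquake on each active fault. A minimal sketch of this step, using a generic attenuation form; the coefficients c1-c4 below are placeholders for illustration, not the relationship used in the study:

```python
import math

# Generic attenuation (ground-motion prediction) form:
#   ln(PGA) = c1 + c2*M - c3*ln(R + c4)
# where M is magnitude and R the source-to-site distance in km.
# The coefficients here are HYPOTHETICAL, chosen only to illustrate the shape.
def pga_g(magnitude, distance_km, c1=-1.0, c2=0.5, c3=1.0, c4=10.0):
    """Peak ground acceleration (in units of g) from a generic attenuation form."""
    return math.exp(c1 + c2 * magnitude - c3 * math.log(distance_km + c4))

# Hypothetical controlling event: M 6.5 at 20 km from the site.
print(f"PGA ≈ {pga_g(6.5, 20):.2f} g")
# PGA decays with distance for the same magnitude:
print(f"PGA at 50 km ≈ {pga_g(6.5, 50):.2f} g")
```

The deterministic approach then takes the maximum such PGA over all candidate sources as the design ground motion, which is subsequently propagated through the soil column in the 1-D ground response analysis.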

Keywords: seismicity, seismic hazards, deterministic, probabilistic methods, ground response analysis

Procedia PDF Downloads 142
1250 Utilizing Fiber-Based Modeling to Explore the Presence of a Soft Storey in Masonry-Infilled Reinforced Concrete Structures

Authors: Akram Khelaifia, Salah Guettala, Nesreddine Djafar Henni, Rachid Chebili

Abstract:

Recent seismic events have underscored the significant influence of masonry infill walls on the resilience of structures. The irregular positioning of these walls exacerbates their adverse effects, resulting in substantial material and human losses. Research and post-earthquake evaluations emphasize the necessity of considering infill walls in both the design and assessment phases. This study delves into the presence of soft stories in reinforced concrete structures with infill walls. Employing an approximate method relying on pushover analysis results, fiber-section-based macro-modeling is utilized to simulate the behavior of infill walls. The findings shed light on the presence of soft first stories, revealing a notable 240% enhancement in resistance for weak column—strong beam-designed frames due to infill walls. Conversely, the effect is more moderate at 38% for strong column—weak beam-designed frames. Interestingly, the uniform distribution of infill walls throughout the structure's height does not influence soft-story emergence in the same seismic zone, irrespective of column-beam strength. In regions with low seismic intensity, infill walls dissipate energy, resulting in consistent seismic behavior regardless of column configuration. Despite column strength, structures with open-ground stories remain vulnerable to soft first-story emergence, underscoring the crucial role of infill walls in reinforced concrete structural design.

Keywords: masonry infill walls, soft storey, pushover analysis, fiber section, macro-modeling

Procedia PDF Downloads 39
1249 Evolution of DNA-Binding With-One-Finger Transcriptional Factor Family in Diploid Cotton Gossypium raimondii

Authors: Waqas Shafqat Chattha, Muhammad Iqbal, Amir Shakeel

Abstract:

Transcription factors are proteins that play a vital role in regulating the transcription of target genes in different biological processes and are being widely studied in different plant species. In the current era of genomics, plant genome sequencing has led to the genome-wide identification, analysis and categorization of diverse transcription factor families, providing key insights into their structural as well as functional diversity. The DNA-binding with One Finger (DOF) proteins belong to the C2-C2-type zinc finger protein family. DOF proteins are plant-specific transcription factors implicated in diverse functions, including seed maturation and germination, phytohormone signalling, light-mediated gene regulation, cotton-fiber elongation, and plant responses to biotic as well as abiotic stresses. In this context, a genome-wide in-silico analysis of the DOF transcription factor family in the diploid cotton species Gossypium raimondii enabled us to identify 55 non-redundant genes encoding DOF proteins, named GrDofs (Gossypium raimondii Dof). Gene distribution studies showed that the GrDof genes are unevenly distributed across 12 of the 13 G. raimondii chromosomes. Gene structure analysis illustrated that 34 of the 55 GrDof genes are intron-less, while the remaining 21 have a single intron. Protein sequence-based phylogenetic analysis of the 55 putative GrDOFs divided these proteins into 5 major groups with various paralogous gene pairs. Molecular evolutionary studies, aided by conserved domain and gene structure analyses, suggested that segmental duplications were the principal contributors to the expansion of Dof genes in G. raimondii.

Keywords: diploid cotton, G. raimondii, phylogenetic analysis, transcription factor

Procedia PDF Downloads 125
1248 Childhood Warscape, Experiences from Children of War Offer Key Design Decisions for Safer Built Environments

Authors: Soleen Karim, Meira Yasin, Rezhin Qader

Abstract:

Children's books present a colorful life to kids around the world: their current environment, or what they could potentially have: a home, two loving parents, a playground, and a safe school within a short walk or bus ride. For children displaced by war, these images are only pages in a donated book; the environment they live in is significantly different. Displaced children face a temporary lifestyle filled with fear and uncertainty. Children of war associate various structural institutions with trauma and cannot enter the space, even if it is for their own future development, such as a school. This paper is a collaborative effort between students of the Kennesaw State University architecture department, architectural designers, and a mental health professional to address and link the design challenges and the psychological trauma of children of war. The research process consists of a) interviews with former refugees, b) interviews with current refugee children, c) personal understanding of space through one's own childhood, and d) a literature review of tested design methods for addressing various traumas. Conclusion: in addressing the built environment for children of war, it is necessary to address mental health and wellbeing through the creation of space that is sensitive to the needs of children. This is achieved by understanding critical design cues that evoke normalcy and safe space through program organization, color, and the symbiosis of synthetic and natural environments. By involving the children suffering from trauma in the design process, aspects of the design are directly enhanced to serve the occupant. Neglecting to involve the participants creates a nonlinear design outcome and fails to serve the needs of the occupants, denying them the same opportunity for learning and growth as other children around the world.

Keywords: activist architecture, childhood education, childhood psychology, adverse childhood experiences

Procedia PDF Downloads 120
1247 Aerothermal Analysis of the Brazilian 14-X Hypersonic Aerospace Vehicle at Mach Number 7

Authors: Felipe J. Costa, João F. A. Martos, Ronaldo L. Cardoso, Israel S. Rêgo, Marco A. S. Minucci, Antonio C. Oliveira, Paulo G. P. Toro

Abstract:

The Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics at the Institute for Advanced Studies designed the Brazilian 14-X Hypersonic Aerospace Vehicle, a technological demonstrator endowed with two innovative technologies: waverider technology, which obtains lift from the conical shockwave during hypersonic flight, and a hypersonic airbreathing propulsion system called a scramjet, based on supersonic combustion, to perform flights in Earth's atmosphere at 30 km altitude at Mach numbers 7 and 10. The scramjet is an aeronautical engine without moving parts that promotes compression and deceleration of the freestream atmospheric air at the inlet through the conical/oblique shockwaves generated during hypersonic flight. During high-speed flight, the shock waves and viscous forces give rise to the phenomenon called aerodynamic heating, in which friction between the fluid filaments and the body, or compression at the stagnation regions of the leading edge, converts kinetic energy into heat within a thin layer of air that blankets the body. The temperature of this layer increases with the square of the speed. This high temperature is concentrated in the boundary layer, from which heat flows readily into the structure of the hypersonic aerospace vehicle. The Fay-Riddell and Eckert methods are applied to the stagnation point and to the flat-plate segments, respectively, in order to calculate the aerodynamic heating. To understand the aerodynamic heating, it is important to analyze the heat conduction into the 14-X waverider's internal structure. The ANSYS Workbench software provides a thermal numerical analysis, using the finite element method, of the 14-X waverider with unpowered scramjet at 30 km altitude at Mach numbers 7 and 10, in terms of temperature and heat flux.
Finally, it is possible to verify whether the internal temperature complies with the requirements for embedded systems and whether modifications to the structure, in terms of wall thickness and materials, are necessary.
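The "square of the speed" statement above can be made concrete with the standard adiabatic (recovery) wall temperature relation for a calorically perfect gas. The sketch below uses an approximate atmospheric temperature at 30 km and an illustrative turbulent recovery factor; it is a back-of-the-envelope estimate, not the Fay-Riddell or Eckert calculation of the study:

```python
# Adiabatic wall temperature: T_aw = T_inf * (1 + r*(gamma-1)/2 * M^2).
# The M^2 term is why boundary-layer temperature grows with the square of speed.
def adiabatic_wall_temp(t_inf, mach, gamma=1.4, r=0.9):
    """Recovery temperature in K; r ≈ 0.9 (turbulent, ~Pr^(1/3)),
    while a laminar boundary layer would use r ≈ sqrt(Pr) ≈ 0.85."""
    return t_inf * (1 + r * (gamma - 1) / 2 * mach ** 2)

t_inf = 226.5  # K, approximate atmospheric temperature at 30 km altitude
print(f"T_aw at Mach 7:  {adiabatic_wall_temp(t_inf, 7):.0f} K")
print(f"T_aw at Mach 10: {adiabatic_wall_temp(t_inf, 10):.0f} K")
```

Even this crude estimate shows boundary-layer recovery temperatures in the thousands of kelvin at Mach 7, which is why conduction into the internal structure must be checked against the limits of the embedded systems.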

Keywords: aerodynamic heating, hypersonic, scramjet, thermal analysis

Procedia PDF Downloads 424
1246 Women’s Colours in Digital Innovation

Authors: Daniel J. Patricio Jiménez

Abstract:

Digital reality demands new ways of thinking, flexibility in learning, acquisition of new competencies, visualizing reality under new approaches, generating open spaces, understanding dimensions in continuous change, etc. We need inclusive growth, where colors are not lacking, where lights do not give a distorted reality, where science is not half-truth. In carrying out this study, the documentary or bibliographic collection has been taken into account, providing a reflective and analytical analysis of current reality. In this context, deductive and inductive methods have been used on different multidisciplinary information sources. Women today and tomorrow are a strategic element in science and arts, which, under the umbrella of sustainability, implies ‘meeting current needs without detriment to future generations’. We must build new scenarios, which qualify ‘the feminine and the masculine’ as an inseparable whole, encouraging cooperative behavior; nothing is exclusive or excluding, and that is where true respect for diversity must be based. We are all part of an ecosystem, which we will make better as long as there is a real balance in terms of gender. It is the time of ‘the lifting of the veil’, in other words, it is the time to discover the pseudonyms, the women who painted, wrote, investigated, recorded advances, etc. However, the current reality demands much more; we must remove doors where they are not needed. Mass processing of data, big data, needs to incorporate algorithms under the perspective of ‘the feminine’. However, most STEM students (science, technology, engineering, and math) are men. Our way of doing science is biased, focused on honors and short-term results to the detriment of sustainability. Historically, the canons of beauty, the way of looking, of perceiving, of feeling, depended on the circumstances and interests of each moment, and women had no voice in this. 
Parallel to science, there is an under-representation of women in the arts, but not so much in the universities, but when we look at galleries, museums, art dealers, etc., colours impoverish the gaze and once again highlight the gender gap and the silence of the feminine. Art registers sensations by divining the future, science will turn them into reality. The uniqueness of the so-called new normality requires women to be protagonists both in new forms of emotion and thought, and in the experimentation and development of new models. This will result in women playing a decisive role in the so-called "5.0 society" or, in other words, in a more sustainable, more humane world.

Keywords: art, digitalization, gender, science

Procedia PDF Downloads 148
1245 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System

Authors: Tahsin A. H. Nishat, Raquib Ahsan

Abstract:

Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor in designing any structural system. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology for a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness in each. For the analysis of the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project have been considered. For the analysis of the optimization method, cost optimization of this system has been done using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both types of design parameters, sensitivity analysis has been conducted on the deck slab thickness parameter, which can be highly sensitive around the obtained optimum solution. Deviations of slab thickness on both the upper and lower side of its optimum value have been considered, reflecting its realistic possible range of variation during construction; the remaining parameters were kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined, and the variations in cost have also been estimated. It was found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only 0.3 mm. This result suggests that slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, a sensitivity analysis should be conducted for either design procedure of a girder and deck system.
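The sensitivity check described above amounts to perturbing one parameter around its nominal value, holding the other 13 fixed, and recording the largest deviation that still satisfies every constraint. A minimal sketch of that loop; the feasibility function and the 200 mm nominal thickness below are stand-ins, not the study's actual 14 explicit and 46 implicit constraints:

```python
# Hypothetical feasibility check standing in for the full constraint set.
def constraints_ok(slab_thickness_mm, limit_mm=200.3):
    """True if the (single, illustrative) constraint is satisfied."""
    return slab_thickness_mm <= limit_mm

def max_feasible_increase(nominal_mm, step_mm=0.1, max_steps=1000):
    """Largest upward deviation (mm) from the nominal value that keeps
    the design feasible, found by stepping until a constraint is violated."""
    increase = 0.0
    for i in range(1, max_steps + 1):
        if not constraints_ok(nominal_mm + i * step_mm):
            break
        increase = i * step_mm
    return increase

# With a hypothetical optimum of 200 mm, feasibility is lost beyond +0.3 mm,
# mirroring the tight margin reported for the cost-optimized design.
print(round(max_feasible_increase(200.0), 1))
```

The same loop run with the conventional design's constraint margins would return a much larger allowable increase (25 mm in the study), which is exactly the sensitivity contrast the abstract reports.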

Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system

Procedia PDF Downloads 111
1244 The Effects of Shift Work on Neurobehavioral Performance: A Meta Analysis

Authors: Thomas Vlasak, Tanja Dujlociv, Alfred Barth

Abstract:

Shift work is an essential element of modern labor, ensuring ideal conditions of service for today's economy and society. Despite these beneficial properties, its impact on the neurobehavioral performance of exposed subjects remains controversial. This meta-analysis aims to provide a first summary of the effects regarding the association between shift work exposure and different cognitive functions. A literature search was performed via the databases PubMed, PsycINFO, PsycARTICLES, MedLine, PsycNET and Scopus, including eligible studies until December 2020 that compared shift workers with non-shift workers on neurobehavioral performance tests. A random-effects model was fitted, using Hedges' g as the meta-analytical effect size with a restricted maximum likelihood estimator, to summarize the mean differences between the exposure group and controls. The heterogeneity of effect sizes was addressed by a sensitivity analysis using funnel plots, Egger's tests, p-curve analysis, meta-regressions, and subgroup analysis. The meta-analysis included 18 studies, giving a total sample of 18,802 participants and 37 effect sizes concerning six different neurobehavioral outcomes. The results showed significantly worse performance in shift workers compared to non-shift workers in the following cognitive functions, with g (95% CI): processing speed 0.16 (0.02 - 0.30), working memory 0.28 (0.51 - 0.50), psychomotor vigilance 0.21 (0.05 - 0.37), cognitive control 0.86 (0.45 - 1.27) and visual attention 0.19 (0.11 - 0.26). Neither significant moderating effects of publication year or study quality nor significant subgroup differences regarding type of shift or type of profession were indicated for the cognitive outcomes. These are the first meta-analytical findings to associate shift work with decreased cognitive performance in processing speed, working memory, psychomotor vigilance, cognitive control, and visual attention.
Further studies should focus on a more homogeneous measurement of cognitive functions, a precise assessment of shift work experience, and occupation types that are underrepresented in the current literature (e.g., law enforcement). In occupations where shift work is fundamental (e.g., healthcare, industry, law enforcement), protective countermeasures should be promoted for workers.
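The effect size underlying all the pooled estimates above is Hedges' g, a small-sample-corrected standardized mean difference. A minimal sketch of its computation for a single primary study, with made-up illustration scores (not data from the meta-analysis):

```python
import math

def hedges_g(group1, group2):
    """Hedges' g: Cohen's d times the small-sample correction
    J = 1 - 3/(4*(n1+n2) - 9)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # bias correction J
    return d * correction

# Illustrative cognitive-test scores (higher = better performance):
controls = [52, 55, 49, 60, 58, 51]
shift_workers = [48, 50, 47, 55, 52, 46]
print(f"Hedges' g = {hedges_g(controls, shift_workers):.2f}")
```

In the meta-analysis, one such g (with its sampling variance) is computed per study and outcome, and the random-effects model then pools them into the summary values reported above.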

Keywords: meta-analysis, neurobehavioral performance, occupational psychology, shift work

Procedia PDF Downloads 91
1243 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. Results revealed that the GDM algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, though learning may not be consummated in all instances, also considering the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts, respectively. In specific statistical terms, on average, model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting.
On the whole, it is recommended that any evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions for the overall network forecast performance.
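The GDM update discussed above augments plain gradient descent with a velocity term that accumulates past gradients. A minimal sketch of the update rule, applied to a one-parameter linear model rather than a full streamflow ANN; the learning rate, momentum coefficient, and synthetic data are illustrative:

```python
import random

# Synthetic data: y ≈ 2x plus small noise (target slope is 2).
random.seed(0)
xs = [x / 10 for x in range(20)]
ys = [2.0 * x + random.gauss(0, 0.05) for x in xs]

w, velocity = 0.0, 0.0
lr, momentum = 0.05, 0.9  # illustrative hyperparameters
for _ in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Momentum update: velocity accumulates a decaying sum of past gradients,
    # smoothing the trajectory and speeding convergence along shallow directions.
    velocity = momentum * velocity - lr * grad
    w += velocity

print(f"learned slope ≈ {w:.3f}")
```

The same velocity-based update is what lets GDM avoid the Hessian computation that LM requires, which is the computational-overhead contrast drawn in the abstract.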

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 133
1242 Stability Analysis of Green Coffee Export Markets of Ethiopia: Markov-Chain Analysis

Authors: Gabriel Woldu, Maria Sassi

Abstract:

Coffee plays a pivotal role in Ethiopia's GDP, revenue, employment, domestic demand, and export earnings. Ethiopia's coffee production and exports show high variability in the amount of production and in export earnings. Despite being the world's fifth-largest coffee producer, Ethiopia has not developed its capacity to shine as a major exporter in the globe's green coffee trade; Ethiopian coffee exports have not been stable, with high fluctuations in volume and earnings. The main aim of this study was to analyze the dynamics of variation in coffee exports to different importing nations using a first-order Markov chain model. Fourteen years of time-series data have been used to examine the direction of, and structural change in, coffee exports. A compound annual growth rate (CAGR) was used to determine the annual growth rate in coffee export quantity, value, and per-unit price over the study period. The major export markets for Ethiopian coffee were Germany, Japan, and the USA, which were more stable, while countries such as France, Italy, Belgium, and Saudi Arabia were less stable and had low retention rates for Ethiopian coffee. The study therefore recommends that Ethiopia revitalize its markets in France, Italy, Belgium, and Saudi Arabia, as these are among the major coffee-consuming countries in the world, in order to boost its stake in the global coffee markets in the future. To further enhance export stability, the Ethiopian government and other stakeholders in the coffee sector should work on reducing the volatility of coffee output and exports, improving production and quality efficiency so as to stabilize markets and make the product attractive and price-competitive in the importing countries.
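The two quantitative tools named above are straightforward to sketch: the CAGR over the 14-year period, and retention probabilities read off the diagonal of a first-order Markov transition matrix of export shares. All numbers below are made-up illustrations, not the study's data:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical row-stochastic transition matrix between three destination
# markets: entry P[i][j] is the probability that export share held by market i
# in one year moves to market j in the next. Diagonal entries are retention
# probabilities: stable markets (like Germany, Japan, USA in the study)
# have high diagonals.
destinations = ["Germany", "Japan", "USA"]
P = [
    [0.80, 0.12, 0.08],
    [0.15, 0.70, 0.15],
    [0.10, 0.15, 0.75],
]
retention = {d: P[i][i] for i, d in enumerate(destinations)}

# Hypothetical export value growing from 100 to 180 (index units) in 14 years:
print(f"CAGR: {cagr(100, 180, 14):.2%}")
print(retention)
```

Markets with low diagonal entries lose most of their share to competitors each year, which is the "low retention rate" behaviour the study attributes to France, Italy, Belgium, and Saudi Arabia.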

Keywords: coffee, CAGR, Markov chain, direction of trade, Ethiopia

Procedia PDF Downloads 116
1241 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool

Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles

Abstract:

Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters, and audio amplifiers, among other applications that need efficient power dissipation in small footprints. Semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that the die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid repeated re-design cycles with large losses in cost and time. A simulation tool capable of estimating the die temperature of components in QFN packages was developed. It allows a non-empirical acceptance criterion to be established for the amount of voids in the solder interface between the exposed pad and the Printed Circuit Board (PCB), to be applied during the industrialization process, and it evaluates the impact of PCB design parameters. Targeting the PCB layout designer as the end user, a user-friendly graphical interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs within acceptable time; it can be adopted by PCB designers to prevent potential risks during the design stage and to make the product economically efficient by not oversizing it. This article gathers relevant information on the design and implementation of the tool and presents a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal Test Chip (TTC) in an open-cavity QFN in order to measure the junction temperature (TJ) directly on the die under controlled, known conditions. The paper provides a short overview of standard thermal solutions and their impact in exposed-pad packages (i.e., QFN), describes the methods and techniques that the system designer should use to achieve optimum thermal performance, and demonstrates the effect of system-level constraints on the thermal performance of the design.

Keywords: QFN packages, exposed pads, junction temperature, thermal management and measurements

Procedia PDF Downloads 235
1240 Exploring the Relationships between Job Satisfaction, Work Engagement, and Loyalty of Academic Staff

Authors: Iveta Ludviga, Agita Kalvina

Abstract:

This paper links the concepts of job satisfaction, work engagement, trust, job meaningfulness, and loyalty to the organisation, focusing on a specific type of employment: academic jobs. The research investigates the relationships between job satisfaction, work engagement, and loyalty, as well as the impact of trust and job meaningfulness on work engagement and loyalty. The survey was conducted in one of the largest Latvian higher education institutions, and the sample was drawn from academic staff (n=326). A structured questionnaire with 44 reflective-type questions was developed to measure the constructs. Data were analysed using SPSS and Smart-PLS software. The variance-based structural equation modelling (PLS-SEM) technique was used to test the model and to identify the most important factors relevant to employee engagement and loyalty. The first-order model included two endogenous constructs (loyalty, i.e., intention to stay and to recommend, and employee engagement) as well as six exogenous constructs (feeling of fair treatment and trust in management; career growth opportunities; compensation, pay and benefits; management; colleagues and teamwork; and job meaningfulness). Job satisfaction was modelled as a second-order construct, and both first- and second-order models were designed for data analysis. It was found that academics are more engaged than satisfied with their work, and the main reason was job meaningfulness, which is a significant predictor of work engagement but not of job satisfaction. Compensation is not significantly related to work engagement, only to job satisfaction. Trust was significantly related neither to engagement nor to satisfaction; however, it appeared to be a significant predictor of loyalty and of intentions to stay with the university. The paper reveals academic jobs as a specific kind of employment in which employees can be more engaged than satisfied, and highlights the specific role of job meaningfulness in the university setting.

Keywords: job satisfaction, job meaningfulness, higher education, work engagement

Procedia PDF Downloads 233
1239 Adding Business Value in Enterprise Applications through Quality Matrices Using Agile

Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin

Abstract:

Nowadays the business environment is so fast-paced that continuous improvement has become a major factor in the survival of an enterprise. This holds for structural engineering and even more so in the fast-paced world of information technology and software engineering. Agile methodologies such as Scrum include a dedicated step in the process that targets the improvement of the development process and of the software products. Crucial to process improvement is gathering data that allows one to assess the state of the process and its products. From this status data, improvement actions can be planned and their success evaluated. This study builds a model that measures the software quality of the development process. Software quality depends on the functional and structural quality of the software products; the quality of the development process is likewise important for improving software quality. Functional quality covers adherence to user requirements, while structural quality addresses the structure of the product's source code with respect to its maintainability. Process quality relates to the consistency and predictability of the development process. The software quality model is applied in a business setting by gathering the data for the software metrics in the model. To evaluate the software quality model, we analyse the data and present it to the people involved in the agile software development process. The results from the application and the user feedback suggest that the model enables a sound assessment of software quality and that it can be used to support the continuous improvement of the development process and software products.

Keywords: Agile SDLC Tools, Agile Software development, business value, enterprise applications, IBM, IBM Rational Team Concert, RTC, software quality, software metrics

Procedia PDF Downloads 150
1238 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category-recognition features and individual-recognition features. Current automated face recognition systems extract a feature vector of a given dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that discards the fine details forensic experts rely on. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results support the view that deep-learning-based facial recognition algorithms detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective facial comparison and provides a novel method for human-machine collaboration in this field.
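Score-level fusion of the kind described above can be illustrated with a short sketch. The weight `w`, the function names, and the threshold-at-target-FAR convention are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def fuse_scores(algo, expert, w=0.5):
    """Weighted-sum fusion at the score level (w is a hypothetical weight
    balancing the automated system against the forensic expert)."""
    return w * np.asarray(algo, float) + (1.0 - w) * np.asarray(expert, float)

def frr_at_far(genuine, impostor, target_far):
    """False reject rate at the decision threshold that yields the
    specified false accept rate (accept when score exceeds the threshold)."""
    thr = np.quantile(np.asarray(impostor, float), 1.0 - target_far)
    return float(np.mean(np.asarray(genuine, float) <= thr))
```

Computing `frr_at_far` on the fused scores and on each source separately is one way to verify the claim that fusion lowers the false reject rate at a fixed false accept rate.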

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 108
1237 Photocatalytic Active Surface of LWSCC Architectural Concretes

Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky

Abstract:

Current trends in the building industry are oriented towards reducing maintenance costs and increasing the ecological benefits of buildings and building materials. Surface treatment of building materials with photocatalytically active titanium dioxide added to the concrete can offer a good solution in this context. Architectural concrete has one disadvantage: dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance costs. The concrete surface, a silicate material with open porosity, fulfils the conditions for effective photocatalysis, in particular the self-cleaning of surfaces. This modern material is particularly advantageous for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and of the facades of the surrounding buildings, exhaust fumes can be degraded with the aid of sunshine, and the environmental load will decrease. It is clear that options for removing pollutants such as nitrogen oxides (NOx) must be found: not only do these gases present a health risk, they also cause degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can contribute in the long term to the enhanced appearance of surface layers, eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO3). This paper describes the verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The essence of using LWSCC is its rheological ability to flow into otherwise hard-to-access or inaccessible construction areas, or sections where compacting the concrete would be a problem or where vibration is completely excluded. LWSCC can also form solid monolithic elements with a large variety of shapes while meeting the requirements imposed by chemical aggression and the surrounding environment. Due to their viscosity, LWSCCs are able to imprint the formwork elements into their structure and thus create high-quality lightweight architectural concretes.

Keywords: photocatalytic concretes, titanium dioxide, architectural concretes, Lightweight Self-Compacting Concretes (LWSCC)

Procedia PDF Downloads 275
1236 Self-Energy Sufficiency Assessment of the Biorefinery Annexed to a Typical South African Sugar Mill

Authors: M. Ali Mandegari, S. Farzad, J. F. Görgens

Abstract:

Sugar is one of the main agricultural industries in South Africa, and the livelihoods of approximately one million South Africans depend indirectly on the sugar industry, which is struggling economically and must reinvent itself to ensure long-term sustainability. A second-generation biorefinery is a process that uses fibrous waste for the production of biofuel, chemicals, animal feed, and electricity. Bioethanol is by far the most widely used biofuel for transportation worldwide, and many of the challenges facing bioethanol production have been solved. A biorefinery annexed to an existing sugar mill for the production of bioethanol and electricity is proposed to the sugar industry and is addressed in this study. Since flowsheet development is the key element of the bioethanol process, a biorefinery (bioethanol and electricity production) annexed to a typical South African sugar mill, processing 65 ton/h of dry sugarcane bagasse and tops/trash as feedstock, was simulated in this work. Aspen Plus™ V8.6 was used as the simulator, and a realistic simulation development approach was followed to reflect the practical behaviour of the plant. The latest results of other researchers concerning pretreatment, hydrolysis, fermentation, enzyme production, bioethanol production, and supplementary units such as evaporation, water treatment, the boiler, and steam/electricity generation were adopted to establish a comprehensive biorefinery simulation. Steam explosion with SO2 was selected for pretreatment owing to its minimal inhibitor production, and a simultaneous saccharification and fermentation (SSF) configuration was adopted for the enzymatic hydrolysis and fermentation of cellulose and hydrolysate. Bioethanol purification was simulated with two distillation columns with a side stream, and fuel-grade bioethanol (99.5%) was achieved using a molecular sieve in order to minimize capital and operating costs. The boiler and steam/power generation units were modelled using industrial design data. The results indicate that the annexed biorefinery can be self-energy sufficient when 35% of the feedstock (tops/trash) bypasses the biorefinery process and is loaded directly into the boiler to produce sufficient steam and power for the sugar mill and the biorefinery plant.

Keywords: biorefinery, self-energy sufficiency, tops/trash, bioethanol, electricity

Procedia PDF Downloads 519
1235 The Impact of Neighbourhood Built-Environment on the Formulation and Facilitation of Bottom-up Mutual Help Networks for Senior Residents in Singapore

Authors: Wei Zhang, Chye Kiang Heng, John Chye Fung

Abstract:

Background: The world is currently undergoing the largest wave of both rapid ageing and dramatic urbanisation in human history. As one of the most rapidly ageing countries, Singapore will see about one in four residents aged 65 years and above by 2030 in its high-rise, high-density urban environment. Research questions: To support urban seniors ageing in place and interdependence among senior residents and their informal caregivers, this study argues for a community-based care model with bottom-up mutual help networks and asks how the neighbourhood built environment influences the formation and facilitation of such networks in Singapore. Methods: Two public housing communities with different physical environments and rich age-friendly neighbourhood initiatives were chosen as case studies. The categories, participants, and places of bottom-up mutual help activities were obtained via field observation, non-structured interviews with participants, service providers, and managers of care facilities, and documents. Mapping and content analysis were used to explore the influences of the neighbourhood built environment on the formation and facilitation of bottom-up mutual help networks. Results and conclusions: The results showed that neighbourhood design, place programming, and place governance jointly shape the bottom-up mutual help networks for senior residents. Significance: The outcomes of this study provide fresh evidence for paradigm shifts in community-based care for the elderly and in neighbourhood planning. In addition, the findings shed light on meaningful implications for urban planners and policy makers as they tackle the issues arising from an ageing society.

Keywords: built environment, mutual help, neighbourhood, senior residents, Singapore

Procedia PDF Downloads 115
1234 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed LDEDBP method. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge response in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of the single-hidden-layer feed-forward neural network (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms, achieving the highest overall classification accuracy of 94%.
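The ELM stage described above admits a compact sketch: hidden weights and biases are drawn at random (these are the quantities a weighted-distance GWO would then tune) and the output weights are solved in closed form by least squares. This is an illustrative sketch under assumptions; the function names and the `tanh` activation are not taken from the paper:

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """Extreme learning machine: random input weights and biases,
    output weights solved by least squares on the hidden activations."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # candidate for GWO tuning
    b = rng.standard_normal(n_hidden)                 # candidate for GWO tuning
    H = np.tanh(X @ W + b)                            # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

A meta-heuristic such as GWO would wrap `elm_train`, scoring each candidate (W, b) by validation accuracy instead of drawing them once at random.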

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 142
1233 Comparative Proteomic Analysis of Rice bri1 Mutant Leaves at Jointing-Booting Stage

Authors: Jiang Xu, Daoping Wang, Yinghong Pan

Abstract:

The jointing-booting stage is a critical period of both vegetative and reproductive growth in rice. Therefore, proteomic analysis at the jointing-booting stage of a mutant of OsBRI1, the gene encoding the putative brassinosteroid (BR) receptor OsBRI1, is very important for understanding the effects of BRs on vegetative and reproductive growth. In this study, the proteomes of leaves from an allelic mutant of the DWARF 61 (D61, OsBRI1) gene, Fn189 (dwarf54, d54), and its wild-type variety T65 (Taichung 65) at the jointing-booting stage were analysed using a Q Exactive Plus Orbitrap mass spectrometer, and more than 3,100 proteins were identified in each sample. Ontology analysis showed that these proteins are distributed in various cellular compartments, such as the chloroplast, mitochondrion, and nucleus; they function as structural components and/or catalytic enzymes and are involved in many physiological processes. Quantitative analysis showed that 266 proteins were differentially expressed between the two samples: 77 proteins decreased and 189 increased more than two-fold in Fn189 compared with T65. The proteins whose content decreased in Fn189 include a b5-like heme/steroid binding domain containing protein, a putative retrotransposon protein, and a putative glutaminyl-tRNA synthetase, while the proteins with higher content include an mTERF, a putative oligopeptidase homologue, and a zinc knuckle protein, among others. A former study found that the transcription level of an mTERF was up-regulated in the leaves of maize seedlings after EBR treatment. In our experiments, it was interesting that one mTERF protein increased while another decreased in the leaves of Fn189 at the jointing-booting stage, which suggests that BRs may regulate the expression of different mTERF proteins differentially. The relationship between the other differential proteins and BRs is still unclear, and the effects of BRs on rice protein contents and the underlying regulatory mechanisms need further research.

Keywords: bri1 mutant, jointing-booting stage, proteomic analysis, rice

Procedia PDF Downloads 223
1232 Tax System Reform in Nepal: Analysis of Contemporary Issues, Challenges, and Ways Forward

Authors: Dilliram Paudyal

Abstract:

The history of taxation in Nepal dates back to antiquity. However, the modern tax system gained momentum after the establishment of democracy in 1951, initially focusing only on land tax and tariffs on foreign trade. In due time, several taxes were introduced, including direct taxes, indirect taxes, and non-tax revenues. However, the tax structure in Nepal is heavily dominated by indirect taxes, which contribute more than 60% of total revenue. The government has been mobilizing revenue through a series of tax reforms during the Tenth Five-Year Plan (2002-2007) and the successive Three-Year Interim Development Plans by introducing several tax measures. However, these reforms are regressive in nature and do not lead the overall economy towards short-run stability or long-run development. Based on a literature review and discussions with government officials and a few taxpayers, individually and in groups, this paper aims to identify the major issues and challenges that hinder effective tax reform in Nepal. Additionally, it identifies potential ways and processes for tax reform in Nepal. The results of the study indicate that transparency is a major problem in the Nepalese tax system, with serious structural constraints and administrative and procedural complexities embedded in the Income Tax Act; taxpayers are often unaware of the specific amount of tax with which they must comply. Other issues include high tax rates, a limited tax base, leakages in tax collection, a rigid and complex Income Tax Act, inefficient and corrupt tax administration, the limited potential of direct taxes, and the negative responsiveness of the land tax combined with high administrative costs. In this context, the tax structure should be rectified and additional resources mobilized to a greater extent by establishing an effective, dynamic, and highly empowered Autonomous Revenue Board.

Keywords: corrupt, development, inefficient, taxation

Procedia PDF Downloads 158
1231 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, so the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major approaches still dominate, Box-Jenkins ARIMA and exponential smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing remains one of the most practically relevant forecasting methods available, owing to its simplicity, robustness, and accuracy as an automatic forecasting procedure, demonstrated especially in the famous M-Competitions. Despite this success and widespread use in many areas, ES models have some shortcomings that negatively affect forecast accuracy. This study therefore proposes a new forecasting method, called the ATA method, to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; the two approaches therefore have similar structural forms, and ATA can easily be adapted to each of the individual ES models while offering many advantages due to its innovative weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods with additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models, called ATA trended models, are compared in predictive performance to their counterpart ES models on the M3-Competition data set, since it is still the most recent and comprehensive time-series data collection available. The models are shown to outperform their counterparts in almost all settings, and when model selection is carried out among the trended models, ATA outperforms all competitors in the M3-Competition for both short-term and long-term forecasting horizons when forecasting accuracy is compared using popular error metrics.
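A minimal reading of the additive-trend ATA recursion can be sketched as below: the fixed smoothing constants of the corresponding ES (Holt) model are replaced by the time-varying weights p/t and q/t. This is an illustrative sketch of the published recursion, not the authors' code; the initialisation choices and default parameters are assumptions:

```python
def ata_additive(y, p=1, q=0):
    """One-step-ahead forecast from the additive-trend ATA recursion:
    level and trend are smoothed with weights p/t and q/t that shrink
    as more observations arrive (sketch; p >= q assumed)."""
    level, trend = y[0], 0.0
    for t in range(2, len(y) + 1):
        obs, prev = y[t - 1], level
        level = (p / t) * obs + ((t - p) / t) * (level + trend)
        trend = (q / t) * (level - prev) + ((t - q) / t) * trend
    return level + trend
```

With q = 0 the trend stays at zero and the recursion reduces to a simple-smoothing analogue, mirroring how ATA specialises to the individual ES models.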

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 159
1230 Structural Analysis of Polymer Thin Films at Single Macromolecule Level

Authors: Hiroyuki Aoki, Toru Asada, Tomomi Tanii

Abstract:

The properties of a spin-cast film of a polymer material differ from those of the bulk material because the polymer chains are frozen in a non-equilibrium state by the rapid evaporation of the solvent. However, there has been little information on the non-equilibrium conformation and dynamics in a spin-cast film at the single-chain level. Real-space observation of individual chains would provide direct information for discussing the morphology and dynamics of single polymer chains. The recent development of super-resolution fluorescence microscopy methods allows conformational analysis of a single polymer chain. In the current study, the conformation of a polymer chain in a spin-cast film was examined by super-resolution microscopy. Poly(methyl methacrylate) (PMMA) with a molecular weight of 2.2 x 10^6 was spin-cast onto a glass substrate from toluene and from chloroform. For super-resolution fluorescence imaging, a small amount of PMMA labeled with a rhodamine spiroamide dye was added. The radius of gyration (Rg) was evaluated from the super-resolution fluorescence image of each PMMA chain. The root-mean-square Rg was 48.7 and 54.0 nm in the spin-cast films prepared from the toluene and chloroform solutions, respectively. On the other hand, the chain dimension in the bulk state (a thermally annealed 10-μm-thick sample) was 43.1 nm. This indicates that the PMMA chain in the spin-cast film takes an expanded conformation compared to the unperturbed chain and that the chain dimension depends on the solvent quality. In a good solvent, the PMMA chain adopts an expanded conformation owing to the excluded volume effect; on rapid solvent evaporation, the chain is frozen before it can relax from the non-equilibrium expanded conformation to the unperturbed one.
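The radius of gyration used above to size each chain can be computed directly from the localisation coordinates extracted from a super-resolution image. The sketch below assumes 2-D point data and is illustrative only; it is not tied to any particular imaging software:

```python
import numpy as np

def radius_of_gyration(points):
    """Root-mean-square distance of localisation points from their
    centroid, the Rg of a single imaged chain."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))
```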

Keywords: chain conformation, polymer thin film, spin-coating, super-resolution optical microscopy

Procedia PDF Downloads 262
1229 In-situ Phytoremediation Of Polluted Soils By Micropollutants From Artisanal Gold Mining Processes In Burkina Faso

Authors: Yamma Rose, Kone Martine, Yonli Arsène, Wanko Ngnien Adrien

Abstract:

Artisanal gold mining has seen a resurgence in recent years in Burkina Faso, with soil and water pollution as its corollary. Indeed, in addition to its visible impacts, it generates discharges rich in trace metal elements and acids. This pollution has significant environmental consequences, making the affected land unusable while the population depends on the natural environment for its survival. The goal of this study is to assess the decontamination potential of Chrysopogon zizanioides at two artisanal gold processing sites in Burkina Faso. The cyanidation sites of Nebia (1 ha) and Nimbrogo (2 ha), located in the Central West and Central South regions respectively, were selected. The soils were characterized to determine the initial pollution levels before the implementation of phytoremediation. After site preparation, parallel trenches 6 m apart, 30 cm deep, 40 cm wide, and opposite to the water flow direction were dug and filled with soil amended with manure. The Chrysopogon zizanioides plants were transplanted into the trenches at 5 cm spacing. The mere fact that Chrysopogon zizanioides grew in the polluted soil indicates that this plant tolerates and resists the toxicity of the trace elements present at the sites. The characterization shows heavily polluted sites, with free cyanide 900 times higher than the national standard, a soil Hg level 5 times the limit value, and iron and Zn respectively 1000 times and 200 times the tolerated environmental value. After T1 (6 months) and T2 (12 months) of cultivation, Chrysopogon zizanioides showed less development at the Nimbrogo site than at the Nebia site. Plant shoots and associated soil samples were collected and analyzed for total As, Hg, Fe, and Zn concentrations. The trace element content of the soil, the bioaccumulation factor, and the hyperaccumulation thresholds were also determined to assess the remediation potential. The concentrations of As and Hg in the soil were below international risk thresholds, while those of Fe and Zn were well above them. The CN removal efficiency at the Nebia site was 29.90% at T1 and 68.62% at T2, compared to 6.6% and 60.8% at Nimbrogo.
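The removal efficiencies quoted above follow from the usual percentage-removal formula; a one-line sketch (hypothetical function name) is:

```python
def removal_efficiency(c_initial, c_final):
    """Percentage removal of a pollutant between two sampling dates."""
    return 100.0 * (c_initial - c_final) / c_initial
```

For example, a soil whose free-cyanide concentration drops from an initial value to 31.38% of it corresponds to the 68.62% efficiency reported for Nebia at T2.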

Keywords: chrysopogon zizanioides, in-situ phytoremediation, polluted soils, micropollutants

Procedia PDF Downloads 53
1228 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a deep convolutional neural network (DCNN). In this study, pavement distress is classified as transverse or longitudinal cracking, alligator cracking, pothole, or intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, batch size and learning rate. The model is optimized by checking all feasible combinations and selecting the best-performing one. For the optimized model, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
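The exhaustive check of all feasible combinations described above is a plain grid search; the sketch below is illustrative only (the `score_fn` callback, standing in for training and validating one DCNN configuration, is an assumption):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive search over every combination in param_grid, keeping
    the configuration with the highest score (e.g. validation accuracy)."""
    best, best_score = None, float("-inf")
    for values in product(*param_grid.values()):
        cfg = dict(zip(param_grid.keys(), values))
        score = score_fn(cfg)          # train/validate one model here
        if score > best_score:
            best, best_score = cfg, score
    return best, best_score
```

In practice each `score_fn` call is a full training run, so the grid (filters, filter sizes, optimizer, batch size, learning rate, ...) must stay small enough to be tractable.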

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 62
1227 Studying the Effect of Different Sizes of Carbon Fiber on Locally Developed Copper Based Composites

Authors: Tahir Ahmad, Abubaker Khan, Muhammad Kamran, Muhammad Umer Manzoor, Muhammad Taqi Zahid Butt

Abstract:

Metal matrix composites (MMC) are a class of weight-efficient structural materials that are becoming popular in engineering applications, especially in the electronics, aerospace, aircraft, packaging, and various other industries. This study focuses on the development of a carbon fiber reinforced copper matrix composite. Keeping in view the vast applications of metal matrix composites, this specific material is produced for its unique mechanical and thermal properties, i.e., high thermal conductivity and low coefficient of thermal expansion at elevated temperatures. The carbon fibers were not pretreated but were coated with copper by electroless plating in order to increase the wettability of the carbon fiber with the copper matrix. Casting was chosen as the manufacturing route for the C-Cu composite. Four different compositions of the composite were developed by varying the amount of carbon fibers at 0.5, 1, 1.5, and 2 wt.% of the copper. The effect of varying carbon fiber content and size on the mechanical properties of the C-Cu composite is studied in this work. Tensile tests were performed on the tensile specimens. The yield strength decreases with increasing fiber content, while the ultimate tensile strength increases with increasing fiber content. Rockwell hardness tests were also performed, and the results followed an increasing trend with increasing carbon fiber content: the hardness numbers are 30.2, 37.2, 39.9, and 42.5 for samples 1, 2, 3, and 4, respectively. The microstructures of the specimens were also examined under the optical microscope. Wear testing and SEM analysis were also carried out to characterize the C-Cu matrix composite. Though casting may be a viable route for the production of the C-Cu matrix composite, powder metallurgy remains preferable, as the wettability of the carbon fiber with the matrix would then be better.

Keywords: copper based composites, mechanical properties, wear properties, microstructure

Procedia PDF Downloads 343
1226 Mechanical and Microstructural Study of Photo-Aged Low Density Polyethylene (LDPE) Films

Authors: Meryem Imane Babaghayou, Abdelhafidi Asma

Abstract:

This study deals with the ageing of blown-extruded films of low-density polyethylene (LDPE) used for greenhouse covering. The LDPE films were subjected to climatic ageing at a sub-Saharan facility at Laghouat (Algeria) with direct exposure to the sun. The microstructural changes in the films were analyzed by FTIR at different states of ageing. The mechanical characterization was performed on a uniaxial tensile apparatus, and mechanical properties such as Young's modulus, strain at break, and stress at break were followed over exposure times of 0 to 6 months. The climatic ageing of the LDPE films acts on the microstructure in two ways: i) oxidation of the molecular chains, and ii) formation of cross-links and chain scissions, which are together responsible for the modification of the material's mechanical behavior. Cross-links favor a strengthening of the mechanical properties at break (an increase of σr and εr); chain scission, on the other hand, leads to a decrease of these properties. The increase in Young's modulus also appears to be related to these structural changes, since cross-linking increases the average molecular weight. Branching and entanglements favor the ductile character of the material, whereas chain scission reduces the average molecular weight and therefore promotes stiffening (following morphological changes), so that the material becomes brittle. Post-mortem analysis of the samples shows that mechanical stress has an effect on the molecular structure of the material. Although the concentrations of the various chemical species change quantitatively, only the unsaturations raise the question of a possible microstructural modification induced by the mechanical stress applied during the tensile test; a more rigorous analysis with other means of investigation is therefore recommended.
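The three tensile properties tracked in this study can be read off a discretized stress-strain curve. The sketch below uses hypothetical, LDPE-like numbers (not the paper's measurements) and a hypothetical `elastic_limit` cutoff for the linear region:

```python
# Extracting Young's modulus, stress at break, and strain at break from a
# discretized uniaxial stress-strain curve (hypothetical illustrative data).

def tensile_properties(strain, stress, elastic_limit=0.01):
    """Modulus from the initial linear region; break point = last recorded sample."""
    # Young's modulus: least-squares slope through the origin over the elastic region
    pairs = [(e, s) for e, s in zip(strain, stress) if e <= elastic_limit]
    modulus = sum(e * s for e, s in pairs) / sum(e * e for e, _ in pairs)
    return modulus, stress[-1], strain[-1]  # E, sigma_r, eps_r

# Hypothetical ductile curve: stiff initial slope, then drawing up to failure
strain = [0.0, 0.005, 0.01, 0.5, 1.0, 2.0]   # dimensionless
stress = [0.0, 1.0, 2.0, 9.0, 10.0, 12.0]    # MPa

E, sigma_r, eps_r = tensile_properties(strain, stress)
print(f"E = {E:.0f} MPa, stress at break = {sigma_r} MPa, strain at break = {eps_r}")
# -> E = 200 MPa, stress at break = 12.0 MPa, strain at break = 2.0
```

Following these three quantities across exposure times, as the authors do, then separates the cross-linking signature (σr and εr rising) from the chain-scission signature (both falling, E rising).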

Keywords: low-density polyethylene, ageing, mechanical properties, FTIR

Procedia PDF Downloads 342