Search results for: diagnostic obesity notation model assessment index
454 Dynamic Features Selection for Heart Disease Classification
Authors: Walid MOUDANI
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. In fact, valuable knowledge can be discovered from the application of data mining techniques in healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from Coronary Heart Disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique associated with dynamic programming. We then propose to validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions based on the medical profiles of patients. Moreover, the experts' knowledge in this field has been taken into consideration in order to define the disease, its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated based on a set of benchmark techniques applied to this classification problem.
Keywords: Multi-Classifier Decision Tree, Features Reduction, Dynamic Programming, Rough Sets.
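The abstract above pairs feature-subset reduction with Random Forest validation. As a rough, non-authoritative sketch of that pipeline (not the authors' rough-set/dynamic-programming implementation), the Python fragment below exhaustively scores small feature subsets with a cross-validated Random Forest; the file name and column names are assumptions.

```python
# Illustrative sketch only: exhaustive enumeration of small feature subsets
# followed by Random Forest validation, standing in for the rough-set /
# dynamic-programming reduction described in the abstract. The data file and
# column names are hypothetical.
from itertools import combinations

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("chd_patients.csv")           # hypothetical patient records
X, y = data.drop(columns=["heart_attack"]), data["heart_attack"]

best_score, best_subset = 0.0, None
for k in range(2, 5):                            # small subsets only; exhaustive search is for illustration
    for subset in combinations(X.columns, k):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        score = cross_val_score(clf, X[list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print(f"best subset {best_subset} with CV accuracy {best_score:.3f}")
```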
453 A Cross-Disciplinary Educational Model in Biomanufacturing to Sustain a Competitive Workforce Ecosystem
Authors: Rosa Buxeda, Lorenzo Saliceti-Piazza, Rodolfo J. Romañach, Luis Ríos, Sandra L. Maldonado-Ramírez
Abstract:
Biopharmaceuticals manufacturing is one of the major economic activities worldwide. Ninety-three percent of the workforce in a biomanufacturing environment concentrates in production-related areas. As a result, strategic collaborations between industry and academia are crucial to ensure the availability of the knowledgeable workforce needed for an economic region to become competitive in biomanufacturing. In the past decade, our institution has been a key strategic partner with multinational biotechnology companies in supplying science and engineering graduates in the field of industrial biotechnology. Initiatives addressing all levels of the educational pipeline, from K-12 to college to continued education for company employees, have been established along a ten-year span. The Amgen BioTalents Program was designed to provide undergraduate science and engineering students with training in biomanufacturing. The areas targeted by this educational program enhance their academic development, since these topics are not part of their traditional science and engineering curricula. The educational curriculum covered the process of producing a biomolecule, from the genetic engineering of cells to the expression and purification of a specifically targeted polypeptide, through to quality control and validation. This paper reports and describes the implementation details and outcomes of the first sessions of the program.
Keywords: Biomanufacturing curriculum, interdisciplinary learning, workforce development, industry-academia partnering.
452 Low-Cost Mechatronic Design of an Omnidirectional Mobile Robot
Authors: S. Cobos-Guzman
Abstract:
This paper presents the results of a mechatronic design based on a 4-wheel omnidirectional mobile robot that can be used in indoor logistic applications. The low-level control has been implemented using two open-source hardware boards (Raspberry Pi 3 Model B+ and Arduino Mega 2560) that control four industrial motors, four ultrasound sensors, four optical encoders, a vision system of two cameras, and a Hokuyo URG-04LX-UG01 laser scanner. Moreover, the system is powered with a lithium battery that can supply 24 V DC with a capacity of 20 Ah. The Robot Operating System (ROS) has been implemented on the Raspberry Pi, and the performance is evaluated with the selected sensors and hardware. The mechatronic system is evaluated, and safe modes of power distribution for controlling all the electronic devices are proposed based on different tests. Therefore, based on different performance results, some recommendations are indicated for using the Raspberry Pi and Arduino in terms of power, communication, and distribution of control for different devices. According to these recommendations, the sensors are distributed between both real-time controllers (Arduino and Raspberry Pi). On the other hand, the drivers of the cameras have been implemented in Linux and a Python program has been written to access the cameras. These cameras will be used for implementing a deep learning algorithm to recognize people and objects. In this way, the level of intelligence can be increased in combination with the maps that can be obtained from the laser scanner.
Keywords: Autonomous, indoor robot, mechatronic, omnidirectional robot.
451 Numerical Studies on Flow Field Characteristics of Cavity Based Scramjet Combustors
Authors: Rakesh Arasu, Sasitharan Ambicapathy, Sivaraj Ponnusamy, Mohanraj Murugesan, V. R. Sanal Kumar
Abstract:
The flow field within the combustor of a scramjet engine is very complex and poses a considerable challenge in the design and development of a supersonic combustor with an optimized geometry. In this paper, comprehensive numerical studies on the flow field characteristics of different cavity-based scramjet combustors with transverse injection of hydrogen have been carried out for both non-reacting and reacting flows. The numerical studies have been carried out using a validated 2D unsteady, density-based, 1st-order implicit k-omega turbulence model with multi-component finite-rate reacting species. The results show a wide variety of flow features resulting from the interactions between the injector flows, shock waves, boundary layers, and cavity flows. We conjectured that an optimized cavity is a good choice to stabilize the flame in the hypersonic flow, and that it generates a recirculation zone in the scramjet combustor. We found that the cavity-based scramjet combustors have a bearing on the source of disturbance for the transverse jet oscillation, fuel/air mixing enhancement, and flame-holding improvement. We concluded that a cavity shape with a backward-facing step and a 45° forward ramp is a good choice to obtain higher temperatures at the exit compared to the other four models of scramjet combustors considered in this study.
Keywords: Flame holding, Hypersonic flow, Scramjet combustor, Supersonic combustor.
450 Highly Optimized Novel High Speed Low Power Barrel Shifter at 22nm Hi K Metal Gate Strained Si Technology Node
Authors: Shobha Sharma, Amita Dev
Abstract:
This research paper presents a highly optimized barrel shifter at the 22nm Hi-K metal gate strained Si technology node. This barrel shifter has a unique combination of static and dynamic body bias which gives the lowest power delay product. This power delay product is compared with that of the same circuit at the same technology node with static forward biasing at ‘supply/2’ and also with normal reverse substrate biasing, and is still found to be the lowest. The power delay product of this barrel shifter is 0.39362×10⁻¹⁷ J and is lower by approximately 78% than that of the reference barrel shifter proposed at the 32nm bulk CMOS technology node. The power delay product of the barrel shifter at 22nm Hi-K metal gate technology with normal reverse substrate bias is 2.97186933×10⁻¹⁷ J and can be compared with this design’s PDP of 0.39362×10⁻¹⁷ J. This design uses both static and dynamic substrate biasing and also has an approximately 96% lower power delay product compared to the design forward body biased at half of the supply voltage. The NMOS models used are the predictive technology models of Arizona State University, and the simulations were carried out using the HSPICE simulator.
Keywords: Dynamic body biasing, highly optimized barrel shifter, PDP, Static body biasing.
449 Fluorescence Quenching as an Efficient Tool for Sensing Application: Study on the Fluorescence Quenching of Naphthalimide Dye by Graphene Oxide
Authors: Sanaz Seraj, Shohre Rouhani
Abstract:
Recently, graphene has gained much attention because of its unique optical, mechanical, electrical, and thermal properties. Graphene has been used as a key material in technological applications in various areas such as sensors, drug delivery, supercapacitors, transparent conductors, and solar cells. It has a superior quenching efficiency for various fluorophores. Based on these unique properties, optical sensors with graphene materials as the energy acceptors have demonstrated great success in recent years. During quenching, the emission of a fluorophore is perturbed by a quencher, which can be a substrate or biomolecule, and due to this phenomenon, fluorophore-quencher pairs have been used for the selective detection of target molecules. Among fluorescent dyes, 1,8-naphthalimide is well known as a typical intramolecular charge transfer (ICT) and photo-induced charge transfer (PET) fluorophore, with strong absorption and emission in the visible region, high photostability, and a large Stokes shift. Derivatives of 1,8-naphthalimides have found applications in several areas, especially fluorescence sensors. Herein, the fluorescence quenching by graphene oxide has been studied on a naphthalimide dye as a fluorescent probe model. The quenching ability of graphene oxide on the naphthalimide dye was studied by UV-Vis and fluorescence spectroscopy. This study showed that graphene is an efficient quencher for fluorescent dyes. Therefore, it can be used as a suitable candidate sensing platform. To the best of our knowledge, studies on the quenching and absorption of naphthalimide dyes by graphene oxide are rare.
Keywords: Fluorescence, graphene oxide, naphthalimide dye, quenching.
448 Influence of Loading Pattern and Shaft Rigidity on Laterally Loaded Helical Piles in Cohesionless Soil
Authors: Mohamed Hesham Hamdy Abdelmohsen, Ahmed Shawky Abdul Aziz, Mona Fawzy Al-Daghma
Abstract:
Helical piles are widely used as axially and laterally loaded deep foundations. When they are required to resist bearing combined loads (BCLs), such as axial compression and lateral thrust, different behaviour is expected, necessitating further investigation. The aim of the present article is to clarify the behaviour of a single helical pile of different shaft rigidities embedded in cohesionless soil and subjected to simultaneous or successive loading patterns of BCLs. The study was first developed analytically and then extended numerically. The numerical analysis using PLAXIS 3D was further verified through a laboratory experimental programme on a set of helical pile models. The results indicate highly interactive effects of the studied parameters, but it is clearly confirmed that the pile performance increases with both the increase of shaft rigidity and the change of the BCL loading pattern from simultaneous to successive. However, it is noted that an increase in vertical load does not always enhance the lateral capacity but may instead cause a decrease in lateral capacity, as observed with helical piles with flexible shafts. This study provides insightful information for the design of helical piles in structures loaded by complex sequences of forces, such as wind turbines and industrial shafts.
Keywords: Helical pile, lateral loads, combined loads, cohesionless soil, analytical model, PLAXIS 3D.
447 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering
Authors: Sharifah Mousli, Sona Taheri, Jiayuan He
Abstract:
Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual’s ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication known to treat ASD, early intervention can significantly improve an affected individual’s overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods to ASD, as a large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches (K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performances of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
Keywords: Autism spectrum disorder, clustering, optimization, unsupervised machine learning.
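For readers who want to reproduce the flavour of the comparison described above, the following sketch runs a few standard scikit-learn clustering methods on a hypothetical ASD screening table and scores them against the screening labels. It is only an illustration: the COMSEP-Clust method is not publicly packaged here, and the file and column names are assumptions.

```python
# Non-authoritative sketch of a clustering comparison like the one in the
# abstract, using scikit-learn stand-ins; the dataset name and columns are
# hypothetical and the features are assumed to be numeric.
import pandas as pd
from sklearn.cluster import AffinityPropagation, AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("asd_screening_toddlers.csv")       # hypothetical dataset
X = StandardScaler().fit_transform(data.drop(columns=["asd_label"]))
y = data["asd_label"]                                  # used only for evaluation

methods = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "affinity propagation": AffinityPropagation(random_state=0),
}
for name, model in methods.items():
    labels = model.fit_predict(X)
    print(f"{name:22s} ARI = {adjusted_rand_score(y, labels):.3f}")
```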
446 Implementation of Congestion Management Strategies on Arterial Roads: Case Study of Geelong
Authors: A. Das, L. Hitihamillage, S. Moridpour
Abstract:
Natural disasters are inevitable. Disasters such as floods, tsunamis, and tornadoes can be brutal, harsh, and devastating. In Australia, flooding is a major issue experienced by different parts of the country. In such a crisis, delays in evacuation could decide the life and death of the people living in those regions. Congestion management could become a mammoth task if no steps are taken before such situations arise. In the past, many strategies were utilised to manage congestion in such circumstances, such as converting the road shoulders to extra lanes or changing the road geometry by adding more lanes. However, road expansion is no longer considered a viable option for resolving congestion problems. The authorities avoid this option for many reasons, such as lack of financial support and land space. They tend to focus their attention on optimising the current resources they possess and using traffic signals to overcome congestion problems. A traffic signal management strategy was therefore considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the roads with the information gathered from VicRoads. In this paper, various signal parameters are utilised to assess and improve the corridor performance to achieve the best possible Level of Service (LOS) for the arterial road.
Keywords: Congestion, constraints, management, LOS.
445 Effect of Infill Walls on Response of Multi Storey Reinforced Concrete Structure
Authors: Ayman Abd-Elhamed, Sayed Mahmoud
Abstract:
The present research work investigates the seismic response of a reinforced concrete (RC) frame building considering the effect of modeling masonry infill (MI) walls. The seismic behavior of a residential 6-storey RC frame building, considering and ignoring the effect of masonry, is numerically investigated using response spectrum (RS) analysis. The building considered herein is designed as a moment resisting frame (MRF) system following the Egyptian code (EC) requirements. Two developed models, a bare frame and an infill-wall frame, are used in the study. The equivalent diagonal strut methodology is used to represent the behavior of the infill walls, whilst the well-known software package ETABS is used for implementing all frame models and performing the analysis. The results of the numerical simulations, such as base shear, displacements, and internal forces for the bare frame as well as the infill-wall frame, are presented in a comparative way. The results of the study indicate that the interaction between infill walls and frames significantly changes the responses of buildings during earthquakes compared to the results of the bare frame building model. Specifically, the seismic analysis of the RC bare frame structure leads to underestimation of base shear, and consequently damage or even collapse of buildings may occur under strong shaking. On the other hand, considering infill walls significantly decreases the peak floor displacements and drifts in both the X and Y directions.
Keywords: Masonry infill, bare frame, response spectrum, seismic response.
444 A Mobile Multihop Relay Dynamic TDD Scheme for Cellular Networks
Authors: Jong-Moon Chung, Hyung-Weon Cho, Ki-Yong Jin, Min-Hee Cho
Abstract:
In this paper, we present an analytical framework for the evaluation of the uplink performance of multihop cellular networks based on dynamic time division duplex (TDD). New wireless broadband protocols, such as WiMAX, WiBro, and 3G-LTE, apply TDD, and mobile communication protocols under standardization (e.g., IEEE 802.16j) are investigating mobile multihop relay (MMR) as a future technology. In this paper, a novel MMR TDD scheme is presented, where the dynamic range of the frame is shared among traffic resources of asymmetric nature and multihop relaying. The mobile communication channel interference model comprises inner and co-channel interference (CCI). The performance analysis focuses on the uplink due to the fact that the effects of dynamic resource allocation show significant performance degradation only in the uplink compared to time division multiple access (TDMA) schemes, due to CCI [1-3], whereas the downlink turns out to be the same or better. The analysis is based on the signal to interference power ratio (SIR) outage probability of dynamic TDD (D-TDD) and TDMA systems, which are the most widespread mobile communication multi-user control techniques. This paper presents the uplink SIR outage probability with multihop results and shows that the dynamic TDD scheme applying MMR can provide a performance improvement compared to single hop applications if executed properly.
Keywords: Co-Channel Interference, Dynamic TDD, Mobile Multihop Relay, Cellular Network, Time Division Multiple Access.
443 Empirical Roughness Progression Models of Heavy Duty Rural Pavements
Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed
Abstract:
Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections has been collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability by fitting the validation data well.
Keywords: Roughness progression, empirical model, pavement performance, heavy duty pavement.
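As a minimal illustration of the multilevel idea in the abstract (repeated roughness surveys nested within pavement sections), the sketch below fits a random-intercept mixed-effects model with statsmodels. The data file, column names, and formula are assumptions, not the authors' model.

```python
# Illustrative sketch of a multilevel (mixed-effects) roughness progression
# model in the spirit of the abstract; statsmodels is used as a stand-in and
# all names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

obs = pd.read_csv("roughness_timeseries.csv")   # hypothetical: one row per section per survey

# A random intercept per pavement section accounts for the correlation among
# repeated observations of the same section.
model = smf.mixedlm(
    "roughness ~ age + cum_traffic + initial_strength + subgrade_reactivity + rainfall + drainage_score",
    data=obs,
    groups=obs["section_id"],
)
result = model.fit()
print(result.summary())
```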
442 High School Stem Curriculum and Example of Laboratory Work That Shows How Microcomputers Can Help in Understanding of Physical Concepts
Authors: Jelena Slugan, Ivica Ružić
Abstract:
We are witnessing the rapid development of technologies that change the world around us. However, curriculums and teaching processes are often slow to adapt to the change; it takes time, money, and expertise to implement technology in the classroom. Therefore, the University of Split, Croatia, partnered with a local school, Marko Marulić High School, and created the project "Modern competence in modern high schools", as part of which five different curriculums for STEM areas were developed. One of the curriculums involves combining information technology with physics. The main idea was to teach students how to use different circuits and microcomputers to explore nature and physical phenomena. As a result, using electrical circuits, students are able to recreate in the classroom the phenomena that they observe every day in their environment. So far, high school students have had very little opportunity to perform experiments independently, and especially, those physics experiments did not involve ICT. Therefore, this project has great importance, because the students will finally get a chance to develop themselves in accordance with modern technologies. This paper presents some new methods of teaching physics that will help students to develop experimental skills through the study of the deterministic nature of physical laws. Students will learn how to formulate hypotheses, model physical problems using electronic circuits, and evaluate their results. While doing that, they will also acquire useful problem-solving skills.
Keywords: ICT in physics, curriculum, laboratory activities, STEM.
441 Effect of Crude Oil Particle Elasticity on the Separation Efficiency of a Hydrocyclone
Authors: M. H. Narasingha, K. Pana-Suppamassadu, P. Narataruksa
Abstract:
The separation efficiency of a hydrocyclone has extensively been considered under the rigid particle assumption. A collection of experimental studies have demonstrated discrepancies from the modeling and simulation results. These discrepancies, caused by the actual particle elasticity, have generally led to a larger amount of energy consumption in the separation process. In this paper, the influence of particle elasticity on the separation efficiency of a hydrocyclone system was investigated through Finite Element (FE) simulations using crude oil droplets as the elastic particles. A hydrocyclone of Rietema's design with a diameter of 8 mm was employed to investigate the separation mechanism of the crude oil droplets from water. The cut-size diameter of the crude oil was 10 μm in order to fit the operating range of the adopted hydrocyclone model. Typical parameters influencing the performance of the hydrocyclone were varied, with the feed pressure in the range of 0.3 - 0.6 MPa and the feed concentration between 0.05 – 0.1 w%. In the simulation, the Finite Element scheme was applied to investigate the particle-flow interaction occurring in the crude oil system during the process. The interaction of a single oil droplet of size 10 μm with the flow field was observed. The feed concentration fell in the dilute flow regime, so the particle-particle interaction was ignored in the study. The results exhibited a higher power requirement for the separation of the elastic particulate system when compared with the rigid particulate system.
Keywords: Hydrocyclone, separation efficiency, strain energy density, strain rate.
440 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern
Authors: Rupesh K. Gopal, Saroj K. Meher
Abstract:
In this report, we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is used on the data in the test period to estimate the parameters of the calling behaviour. These parameters are compared against the thresholds. Any deviation beyond the threshold is used to raise an alarm. This method has the advantage of identifying local anomalies as compared to techniques which identify global anomalies. The method is tested for 90 days of study data and 10 days of test data of telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.
Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule-based systems.
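A rough sketch of the study-period/test-period thresholding idea follows; it is not the authors' code. It assumes a single per-customer call-count feature modelled as Gaussian within a usage group, with an alarm raised when the test-period mean leaves the fitted band; the counts below are simulated.

```python
# Rough sketch of MLE-based thresholding on a study period followed by
# alarm checking on a test period; all parameters and data are invented.
import numpy as np

def fit_group(study_counts, k=3.0):
    """MLE of a Gaussian on study-period counts; threshold band = mean +/- k*std."""
    mu, sigma = np.mean(study_counts), np.std(study_counts)
    return mu, sigma, (mu - k * sigma, mu + k * sigma)

def is_anomalous(test_counts, band):
    """Raise an alarm if the test-period mean falls outside the band."""
    lo, hi = band
    m = np.mean(test_counts)
    return m < lo or m > hi

# Hypothetical usage: 90 days of study data, 10 days of test data.
rng = np.random.default_rng(0)
study = rng.poisson(lam=12, size=90)     # normal daily call counts
test = rng.poisson(lam=40, size=10)      # sudden jump in usage
_, _, band = fit_group(study)
print("alarm raised:", is_anomalous(test, band))
```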
439 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise
Authors: J. P. Dubois, Omar M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool developed from the concept of structural risk minimization (SRM). In this paper, the SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colour Gaussian noise (ACGN). The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to a conventional binary signaling optimal model-based detector driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
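The sketch below illustrates, under simplifying assumptions, an SVM detector for BPSK over Rayleigh fading with AWGN compared against a plain sign-threshold decision. It is not the paper's simulation: the feature set, kernel, and SNR are arbitrary choices.

```python
# Minimal illustration (not the paper's simulation) of SVM-based detection of
# BPSK over a Rayleigh fading channel with AWGN; parameters are arbitrary.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_train, n_test, snr_db = 2000, 20000, 5
noise_std = 10 ** (-snr_db / 20)

def generate(n):
    bits = rng.integers(0, 2, n)
    symbols = 2 * bits - 1                           # BPSK mapping {0,1} -> {-1,+1}
    h = rng.rayleigh(scale=1 / np.sqrt(2), size=n)   # Rayleigh fading amplitude
    received = h * symbols + noise_std * rng.standard_normal(n)
    # Feature vector: received sample plus (assumed known) channel gain.
    return np.column_stack([received, h]), bits

X_train, y_train = generate(n_train)
X_test, y_test = generate(n_test)

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)
ber_threshold = np.mean((X_test[:, 0] > 0).astype(int) != y_test)
print(f"BER (SVM): {ber_svm:.4f}   BER (sign threshold): {ber_threshold:.4f}")
```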
438 Creating Shared Value: A Paradigm Shift from Corporate Social Responsibility to Creating Shared Value
Authors: Bolanle Deborah Motilewa, E.K. Rowland Worlu, Gbenga Mayowa Agboola, Marvellous Aghogho Chidinma Gberevbie
Abstract:
Businesses operating in the modern business world are faced with varying challenges, amongst which is the need to ensure that they are performing their societal function of being responsible in the society in which they operate. This responsibility to society is generally termed corporate social responsibility. For many years, the practice of corporate social responsibility (CSR) was solely philanthropic, where organizations gave ‘charity’ or ‘alms’ to society, without any link to the organization’s mission and objectives. However, there has been a shift in the application of CSR from an act of philanthropy to a strategy with a business model engaged in by organizations to create a win-win situation of performing their societal obligation whilst simultaneously performing their economic obligation. In more recent times, the term has moved from CSR to creating shared value, which is simply corporate policies and practices that enhance the competitiveness of a business organization while simultaneously advancing social and economic conditions in the communities in which the company operates. Creating shared value has more recently found greater meaning in underdeveloped countries, which face deep societal challenges that businesses can solve whilst creating economic value. This study thus reviews the literature on CSR, conceptualizing the shift to creating shared value and finally viewing its potential significance in Africa’s development.
Keywords: Corporate social responsibility, shared value, Africapitalism.
437 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
Cropping-system planning is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) and increases production at the same time, taking into account some crop particularities. The combination of this powerful method with the concepts of genetic algorithms results in the possibility of generating sequences of crops in order to form a rotation. The use of this type of algorithm has been efficient in solving problems related to optimization, and their polynomial complexity allows them to be used for solving more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures. One of the expected results is to optimize the usage of resources, in order to minimize the costs and maximize the profit. In order to achieve these goals, a genetic algorithm was designed. This algorithm ensures the finding of several optimized cropping-system possibilities which have the highest profit and, thus, which minimize the costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes). A cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome. Results about the efficiency of this method are presented in a dedicated section. The implementation of this method would bring benefits to the activity of farmers by giving them hints and helping them to use resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
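A hedged sketch of the chromosome/gene encoding described above follows: a rotation is a chromosome, each year's crop is a gene, and fitness is a made-up profit table minus a penalty for repeating a crop in consecutive years. The crops, profit values, and penalty are placeholders, not results from the paper.

```python
# Toy genetic algorithm for a crop rotation; all numbers are invented.
import random

CROPS = ["wheat", "maize", "soybean", "sunflower"]
PROFIT = {"wheat": 300, "maize": 450, "soybean": 400, "sunflower": 350}  # per ha, assumed
YEARS, POP, GENERATIONS = 5, 30, 200

def fitness(rotation):
    profit = sum(PROFIT[c] for c in rotation)
    penalty = sum(200 for a, b in zip(rotation, rotation[1:]) if a == b)  # discourage repeats
    return profit - penalty

def crossover(a, b):
    cut = random.randint(1, YEARS - 1)
    return a[:cut] + b[cut:]

def mutate(rotation, rate=0.1):
    return [random.choice(CROPS) if random.random() < rate else c for c in rotation]

population = [[random.choice(CROPS) for _ in range(YEARS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best rotation:", best, "profit score:", fitness(best))
```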
436 A Novel VLSI Architecture for Image Compression Model Using Low power Discrete Cosine Transform
Authors: Vijaya Prakash.A.M, K.S.Gurumurthy
Abstract:
In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without a significant reduction of image quality. This paper describes a hardware architecture of a low-complexity Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is the method used for the implementation of the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two 1-dimensional Discrete Cosine Transform blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to obtain the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented using MATLAB code. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped using 180nm standard cells. The simulation is done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. Detailed analysis for power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimum area [1].
Keywords: Discrete Cosine Transform (DCT), Inverse Discrete Cosine Transform (IDCT), Joint Photographic Expert Group (JPEG), Low Power Design, Very Large Scale Integration (VLSI).
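The row-column decomposition mentioned in the abstract (two 1D DCT passes with a transposition in between) can be checked numerically with the small sketch below. This mirrors the algorithmic idea only, not the paper's Verilog implementation; the test block is random.

```python
# Numerical sketch: an 8x8 2D DCT computed as two passes of a 1D DCT with a
# transposition between them, verified against the direct matrix form.
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II matrix C such that y = C @ x is the 1D DCT of x."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = 1.0 / np.sqrt(n)
    return c

C = dct_matrix()
block = np.random.default_rng(0).integers(0, 256, (N, N)).astype(float)

stage1 = (C @ block.T).T        # 1D DCT applied to each row
transposed = stage1.T           # transposition memory
stage2 = (C @ transposed.T).T   # 1D DCT applied to each row again
coeffs = stage2.T               # final 2D DCT coefficients

assert np.allclose(coeffs, C @ block @ C.T)   # matches the direct 2D form
recon = C.T @ coeffs @ C                      # IDCT reconstructs the block
assert np.allclose(recon, block)
print("2D DCT via two 1D passes verified; DC coefficient =", round(coeffs[0, 0], 2))
```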
435 Analysis of the Internal Mechanical Conditions in the Lower Limb Due to External Loads
Authors: Kent Salomonsson, Xuefang Zhao, Sara Kallin
Abstract:
Human soft tissue is loaded and deformed by any activity, an effect known as the stress-strain relationship, which is often described by a load and tissue-elongation curve. Several advances have been made in the fields of biology and mechanics of soft human tissue. However, there is limited information available on in vivo tissue mechanical characteristics and behavior. Reliable mechanical properties of human soft tissue cannot be extrapolated from, for example, animal testing. Thus, there is a need for non-invasive methods to analyze the mechanical characteristics of soft human tissue. In the present study, the internal mechanical conditions of the lower limb, subjected to an external load, are studied by use of the finite element method. A detailed finite element model of the lower limb is made possible by the use of MRI scans. Skin, fat, bones, fascia, and muscles are represented separately, and their material properties are obtained from the literature. Previous studies have addressed macroscopic deformation features, e.g. indentation depth, to a large extent. However, the detail in which the internal anatomical features have been modeled does not reveal the critical internal strains that may induce hypoxia and/or eventual tissue damage. The results of the present study reveal that lumped material models, i.e. averaging of the material properties for the different constituents, do not capture regions of critical strains, in contrast to more detailed models.
Keywords: FEM, human soft tissue, indentation, properties.
434 Earth Station Neural Network Control Methodology and Simulation
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often with difficult accessibility by common transport. Satellite earth stations located in remote areas are among the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB–SIMULINK. An artificial neural network (ANN) based model has been developed for the optimum operation of the earth station power system. The ANN is trained using back propagation with the Levenberg–Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is equal to 96%, which means high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and is hence better in terms of the memory and time required for NNC implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.
Keywords: Satellite, neural network, MATLAB, power system.
433 Experimental Investigation of Heat Transfer on Vertical Two-Phased Closed Thermosyphon
Authors: M. Hadi Kusuma, Nandy Putra, Anhar Riza Antariksawan, Ficky Augusta Imawan
Abstract:
A heat pipe is considered for application as a passive system to remove the residual heat generated from a reactor core when an incident occurs, or from a spent fuel storage pool. The objectives are to characterize the heat transfer phenomena and the performance of the heat pipe, and to serve as a model for a large heat pipe to be applied as a passive cooling system on a nuclear spent fuel storage pool. In this experiment, a wickless heat pipe, or two-phase closed thermosyphon (TPCT), is used. The heat flux is varied from 611.24 W/m² to 3291.29 W/m², the filling ratio from 45% to 70%, and the initial pressure from -62 to -74 cm Hg. Demineralized water is used as the working fluid in the TPCT. The results showed that an increase in heat load leads to an increase in evaporation of the working fluid. The optimum filling ratio obtained is 60% of the TPCT evaporator volume, and the initial pressure variations gave different TPCT wall temperature characteristics. The TPCT showed its best performance with a 60% filling ratio and can be considered for application as a passive residual heat removal system or passive cooling system on a spent fuel storage pool.
Keywords: Two-phase closed thermosyphon, heat pipe, passive cooling, spent fuel storage pool.
432 A Modular On-line Profit Sharing Approach in Multiagent Domains
Authors: Pucheng Zhou, Bingrong Hong
Abstract:
How to coordinate the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents, without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, which usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. However, as the number of agents in a multiagent environment increases, the state space of each agent grows exponentially, which causes the combinatorial explosion problem. Profit sharing is one of the reinforcement learning methods that allow agents to learn effective behaviors from their experiences even within non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs much memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. Then, we integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm, and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
Keywords: Multi-agent learning, reinforcement learning, rational profit sharing, modular architecture.
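As a schematic illustration (under simplifying assumptions, not the paper's on-line rational algorithm), the sketch below applies profit-sharing credit assignment on a toy chase-like task: the episode's reward is shared among the visited state-action pairs with geometrically decreasing credit, so no Markov assumption is needed. The task, decay rate, and exploration rate are invented.

```python
# Toy profit-sharing learner; the environment and constants are made up.
import random
from collections import defaultdict

weights = defaultdict(float)         # w[(state, action)]
GAMMA = 0.3                          # credit decay going back from the rewarded step

def choose_action(state, actions, eps=0.1):
    if random.random() < eps or not any(weights[(state, a)] for a in actions):
        return random.choice(actions)
    return max(actions, key=lambda a: weights[(state, a)])

def profit_sharing_update(episode, reward):
    """episode: list of (state, action) pairs ending at the rewarded step."""
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= GAMMA

# Toy 1-D chase: the hunter starts at 0 and must step onto the prey at position 5.
ACTIONS = [-1, +1]
for _ in range(500):
    pos, episode = 0, []
    for _ in range(20):
        action = choose_action(pos, ACTIONS)
        episode.append((pos, action))
        pos = max(0, min(9, pos + action))
        if pos == 5:
            profit_sharing_update(episode, reward=1.0)
            break

print("learned preference at start:", {a: round(weights[(0, a)], 3) for a in ACTIONS})
```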
431 The Effect of Kaizen Implementation on Employees’ Affective Attitude in Textile Company in Ethiopia
Authors: Meseret Teshome
Abstract:
This study has the objective of assessing the effect of kaizen (5S, Muda elimination, and Quality Control Circles (QCC)) on employees’ affective attitude (job satisfaction, commitment, and job stress) in the Kombolcha Textile Share Company. A conceptual model was developed to describe the relationship between kaizen and Employees’ Affective Attitude (EAA) factors. The three factors of employee affective attitude were measured using a questionnaire derived from other validated questionnaires. To collect the data for this study, questionnaires, unstructured interviews, written documents, and direct observations were used. To analyze the data, SPSS and Microsoft Excel were used. In addition, the internal consistency of similar items in the questionnaire instrument was measured for their equivalence by using Cronbach’s alpha test. In this study, the effect of 5S, Muda elimination, and QCC on job satisfaction, commitment, and job stress in the Kombolcha Textile Share Company is assessed, and factors that reduce employees’ job satisfaction with respect to kaizen implementation are identified. The total averages of the means from the questionnaire are 3.1 for job satisfaction, 4.31 for job commitment, and 4.2 for job stress. Results from interviews and secondary data show that kaizen implementation has an effect on EAA. In general, based on the thesis results, it was concluded that kaizen (5S, Muda elimination, and QCC) has a positive effect on improving EAA factors at KTSC. Finally, recommendations for improvement are given based on the results.
Keywords: Kaizen, job satisfaction, job commitment, job stress.
430 Piping Fragility Composed of Different Materials by Using OpenSees Software
Authors: Woo Young Jung, Min Ho Kwon, Bu Seog Ju
Abstract:
A failure of a non-structural component can cause significant damage in critical facilities such as nuclear power plants and hospitals. Historically, it was reported that damage from the leakage of sprinkler systems resulted in the shutdown of hospitals for several weeks after the 1971 San Fernando and 1994 Northridge earthquakes. In most cases, water leakage was observed at the cross joints, sprinkler heads, and T-joint connections in piping systems during and after the seismic events. Hence, the primary objective of this study was to understand the seismic performance of T-joint connections and to develop an analytical Finite Element (FE) model for the T-joint systems of a 2-inch fire protection piping system in hospitals subjected to seismic ground motions. In order to evaluate the FE models of the piping systems using OpenSees, two types of materials were used: 1) Steel02 materials and 2) Pinching4 materials. Results of the current study revealed that the nonlinear moment-rotation FE models for the threaded T-joint agreed well with the experimental results for both FE material models. However, the system-level fragility determined from multiple nonlinear time history analyses at the threaded T-joint was slightly different. The system-level fragility at the T-joint determined with the Pinching4 material was more conservative than that obtained using the Steel02 material in the piping system.
Keywords: Fragility, T-joint, Piping, Leakage, Sprinkler.
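The following sketch shows, in a hedged way, how a lognormal system-level fragility curve can be fitted by maximum likelihood from binary leakage/no-leakage outcomes of nonlinear time-history analyses. The intensity levels and outcomes are invented, not the paper's OpenSees results.

```python
# Illustrative lognormal fragility fit by MLE; the data points are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical results: peak ground acceleration (g) of each run and whether
# the threaded T-joint leaked in that run.
im = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2])
failed = np.array([0, 0, 1, 0, 1, 1, 1, 1])

def neg_log_likelihood(params):
    theta, beta = params
    p = norm.cdf(np.log(im / theta) / beta)      # lognormal fragility function
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(failed * np.log(p) + (1 - failed) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[0.5, 0.4], bounds=[(1e-3, None), (1e-3, None)])
theta_hat, beta_hat = res.x
print(f"median capacity ~ {theta_hat:.2f} g, dispersion beta ~ {beta_hat:.2f}")
```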
429 A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua
Authors: Shervin Khazaeli, Shahab Haj-zamani
Abstract:
Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and modeling of local failure (i.e., discontinua), as well as modeling of in-contact structures (i.e., continua), have been of great interest among researchers. The aim of this research is to meet the requirements with respect to modeling the above-mentioned two different domains simultaneously. To this end, a coupled numerical method is introduced based on the Discrete Element Method (DEM) and the eXtended Finite Element Method (X-FEM). In the coupled procedure, the DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while the X-FEM is utilized to model in-contact structures as continua, which may consist of different types of discontinuities. For verification purposes, the new coupled approach is utilized to examine benchmark problems including different contacts between/within continua and discontinua. Results are validated by comparison with those of existing analytical and numerical solutions. This study proves that the extended-finite-discrete element method can be used to robustly analyze not only contact problems, but also other types of discontinuities in continua such as (i) crack formations and propagations, (ii) voids and bimaterial interfaces, and (iii) combinations of the previous cases. In essence, the proposed method can be used extensively in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.
Keywords: Contact problems, discrete element method, extended-finite element method, soil-structure interaction.
428 The Potential Effect of Biochar Application on Microbial Activities and Availability of Mineral Nitrogen in Arable Soil Stressed by Drought
Authors: Helena Dvořáčková, Jakub Elbl, Irina Mikajlo, Antonín Kintl, Jaroslav Hynšt, Olga Urbánková, Jaroslav Záhora
Abstract:
The application of biochar to arable soils represents a new approach to restore soil health and quality. Many studies have reported a positive effect of biochar application on soil fertility and the development of the soil microbial community. Moreover, biochar may affect soil water retention, but this effect has not yet been sufficiently described. Therefore, this study deals with the influence of biochar application on microbial activities in soil, the availability of mineral nitrogen in soil for microorganisms, mineral nitrogen retention, and plant production. To demonstrate the effect of biochar addition on the above parameters, a pot experiment was carried out. As a model crop, Lactuca sativa L. was used and cultivated from December 10th, 2014 till March 22nd, 2015 in a climate chamber in thoroughly homogenized arable soil with and without the addition of biochar. Five variants of the experiment (V1 – V5) with different irrigation regimes were prepared. Variants V1 – V2 were fertilized with mineral nitrogen, V3 – V4 with biochar, and V5 was a control. Significant differences were found only in plant production and mineral nitrogen retention. The highest content of mineral nitrogen in soil was detected in V1 and V2, about 250% in comparison with the other variants. A positive effect of biochar application on soil fertility and mineral nitrogen availability was not found. On the other hand, the results of plant production indicate a possible positive effect of biochar application on soil water retention.
Keywords: Arable soil, biochar, drought, mineral nitrogen.
427 Geochemistry of Tektites from Maoming of Guandong Province, China
Authors: Yung-Tan Lee, Ren-Yi Huang, Jyh-Yi Shih, Meng-Lung Lin, Yen-Tsui Hu, Hsiao-Ling Yu, Chih-Cheng Chen
Abstract:
We measured the major and trace element contents and Rb-Sr isotopic compositions of 12 tektites from the Maoming area, Guandong province (south China). All the samples studied are splash-form tektites which show pitted or grooved surfaces with schlieren structures on some surfaces. The trace element ratios Ba/Rb (avg. 4.33), Th/Sm (avg. 2.31), Sm/Sc (avg. 0.44), Th/Sc (avg. 1.01), La/Sc (avg. 2.86), Th/U (avg. 7.47), and Zr/Hf (avg. 46.01), as well as the rare earth element (REE) contents of the tektites in this study, are similar to those of the average upper continental crust. From the chemical compositions, it is suggested that the tektites in this study are derived from a similar parental terrestrial sedimentary deposit, which may be related to post-Archean upper crustal rocks. The tektites from the Maoming area have high positive εSr(0) values, ranging from 176.9 to 190.5, which indicate that the parental material for these tektites had Sr isotopic compositions similar to old terrestrial sedimentary rocks and was not dominantly derived from recent young sediments (such as soil or loess). The Sr isotopic data obtained by the present study support the conclusion proposed by Blum et al. (1992) [1] that the depositional age of the sedimentary target materials is close to 170 Ma (Jurassic). Mixing calculations based on the model proposed by Ho and Chen (1996) [2] for various amounts and combinations of target rocks indicate that the best fit for the tektites from the Maoming area is a mixture of 40% shale, 30% greywacke, and 30% quartzite.
Keywords: Geochemistry, Guandong province, South China, Tektites.
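The mixing calculation referenced above can be illustrated with a simple grid search over end-member proportions that minimizes the misfit to a measured composition. The end-member and tektite compositions below are placeholder numbers, not data from Ho and Chen (1996) or from this study.

```python
# Back-of-the-envelope mixing calculation; all compositions are invented.
# Columns: SiO2, Al2O3, FeO, K2O (wt%).
import numpy as np

end_members = {                      # hypothetical end-member compositions
    "shale":     np.array([62.0, 18.0, 6.5, 3.5]),
    "greywacke": np.array([68.0, 14.0, 5.0, 2.5]),
    "quartzite": np.array([95.0, 2.0, 0.8, 0.5]),
}
tektite = np.array([72.0, 12.5, 4.5, 2.3])   # hypothetical measured average

best = (None, np.inf)
for f_shale in np.arange(0, 1.01, 0.05):
    for f_grey in np.arange(0, 1.01 - f_shale, 0.05):
        f_qtz = 1.0 - f_shale - f_grey
        mix = (f_shale * end_members["shale"]
               + f_grey * end_members["greywacke"]
               + f_qtz * end_members["quartzite"])
        misfit = np.sum((mix - tektite) ** 2)
        if misfit < best[1]:
            best = ((f_shale, f_grey, f_qtz), misfit)

(fs, fg, fq), err = best
print(f"best mix: shale {fs:.2f}, greywacke {fg:.2f}, quartzite {fq:.2f} (misfit {err:.1f})")
```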
426 Evaluation of Hancornia speciosa Gomes Lyophilization at Different Stages of Maturation
Authors: D. C. Soares, J. T. S. Santos, D. G. Costa, A. K. S. Abud, T. P. Nunes, A. V. D. Figueiredo, A. M. de Oliveira Junior
Abstract:
Mangabeira (Hancornia speciosa Gomes), a plant native to Brazil, is found growing spontaneously in various regions of the country. The high perishability of tropical fruits such as mangaba makes it necessary to use technologies that promote conservation, aiming to increase the shelf life of this fruit and add value. The objective of this study was to compare the lyophilization curves of mangabas of different sizes and maturation stages. The fruits were freeze-dried for a period of approximately 45 hours in a Liotop model L-108 lyophilizer. Fruits between 38 and 58 mm in diameter were considered large, and fruits between 23 and 28 mm in diameter were considered small; two states of maturation, intermediate and mature, were evaluated. The drying curves of large mangabas in both states of maturation showed linear behavior throughout the process, while the kinetic drying curves of the small fruits, independent of maturation state, showed typical drying behavior, with all steps well defined. These results indicate that the lyophilization time was suitable for the small mangabas, which was not the case for the larger ones. This may indicate that the large mangabas require a longer time to freeze until reaching the equilibrium level, unlike the small fruits, which reach constant moisture at the end of the process. For both types of fruit, water activity, acidity, protein, lipid, and vitamin C were analyzed before and after the process.
Keywords: Freeze dryer, mangaba, conservation, chemical characteristics.
425 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been adopted widely for business purposes. For any software industry, the development of reliable software is becoming a challenging task because a faulty software module may be harmful for the growth of the industry and business. Hence there is a need to develop techniques which can be used for the early prediction of software defects. Due to the complexities of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. These techniques have attracted researchers due to their significant impact on industrial growth by identifying the bugs in software. Based on this, several studies have been carried out, but achieving desirable defect prediction performance is still a challenging task. To address this issue, here we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented, in which an improved fitness function is used for better optimization of features in the data sets. Later, these features are processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.
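To make the GA-plus-decision-tree pipeline concrete, here is a loose sketch in which a binary chromosome selects features and fitness is the cross-validated accuracy of a decision tree. It does not reproduce the authors' improved fitness function, and the dataset name and label column are assumptions.

```python
# Loose GA feature-selection sketch with a decision-tree fitness; the data
# file, columns, and GA constants are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
data = pd.read_csv("defect_metrics.csv")        # hypothetical software-metrics dataset
X, y = data.drop(columns=["defective"]).values, data["defective"].values
n_features, pop_size, generations = X.shape[1], 20, 30

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

population = rng.integers(0, 2, (pop_size, n_features))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])                   # single-point crossover
        flip = rng.random(n_features) < 0.05                         # mutation
        children.append(np.where(flip, 1 - child, child))
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected feature indices:", np.flatnonzero(best))
```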