Search results for: Minimum stopping distance.
199 An Investigation of Surface Texturing by Ultrasonic Impingement of Micro-Particles
Authors: Nagalingam Arun Prasanth, Ahmed Syed Adnan, S. H. Yeo
Abstract:
Surface topography plays a significant role in the functional performance of engineered parts. Control of the surface geometry and an understanding of the surface details are therefore essential to obtain the desired performance. Hence, in the current research contribution, a non-contact micro-texturing technique has been explored and developed. The technique involves ultrasonic excitation of a tool as the prime source of surface texturing for aluminum alloy workpieces. The specimen surface is polished first and is then immersed in a liquid bath containing a 10% weight concentration of Ti6Al4V grade 5 spherical powders. A submerged slurry jet is used to recirculate the spherical powders under the ultrasonic horn, which is excited at an ultrasonic frequency of 40 kHz and an amplitude of 70 µm. The distance between the horn and the workpiece surface was kept fixed at 200 µm using a precision control stage. Texturing effects were investigated for processing times of 1, 3 and 5 s. Thereafter, the specimens were cleaned in an ultrasonic bath for 5 min to remove loose debris from the surface. The developed surfaces were characterized by optical microscopy and a contact surface profiler. The optical microscope images show a texture of circular spots on the workpiece surface indented by the titanium spherical balls. Waviness patterns obtained from the contact surface profiler support the texturing effect produced by the proposed technique. Furthermore, water droplet tests were performed to show the efficacy of the proposed technique in developing hydrophilic surfaces and to quantify the texturing effect produced.
Keywords: Surface texturing, surface modification, topography, ultrasonic.
198 Advantages of Large Strands in Precast/Prestressed Concrete Highway Application
Authors: Amin Akhnoukh
Abstract:
The objective of this research is to investigate the advantages of using large-diameter 0.7 inch prestressing strands in pretensioning applications. The advantages of large-diameter strands are mainly realized in heavy construction applications. Bridges and tunnels are subjected to higher daily traffic with an exponential increase in trucks' ultimate weight, which raises the demand for higher structural capacity of bridges and tunnels. In this research, precast prestressed I-girders were considered as a case study. Flexural capacities of girders fabricated using 0.7 inch strands and different concrete strengths were calculated and compared to the capacities of 0.6 inch strand girders fabricated using equivalent concrete strengths. The effect of bridge deck concrete strength on composite deck-girder section capacity was investigated because of its possible effect on the final section capacity. Finally, a comparison was made between the bridge cross-sections of girders designed using regular 0.6 inch strands and the large-diameter 0.7 inch strands. The research findings showed that the structural advantages of 0.7 inch strands allow for using fewer bridge girders, reduced material quantities, and lighter members. The structural advantages of 0.7 inch strands are maximized when high strength concrete (HSC) is used in girder fabrication and concrete of minimum 5 ksi compressive strength is used in pouring bridge decks. The use of 0.7 inch strands in the bridge industry can partially contribute to improving bridge conditions, minimizing construction cost, and reducing the construction duration of projects.
Keywords: 0.7 Inch Strands, I-Girders, Pretension, Flexure Capacity
197 Prediction of Optimum Cutting Parameters to Obtain Desired Surface in Finish Pass End Milling of Aluminium Alloy with Carbide Tool Using Artificial Neural Network
Authors: Anjan Kumar Kakati, M. Chandrasekaran, Amitava Mandal, Amit Kumar Singh
Abstract:
End milling is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, so the surface roughness of the produced job plays an important role. In general, surface roughness affects the wear resistance, ductility, and tensile and fatigue strength of machined parts and cannot be neglected in design. In the present work, an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (i.e., spindle speed, feed, and depth of cut). The MATLAB ANN toolbox, working on the feed-forward back-propagation algorithm, is used for modeling. A 3-12-1 network structure, having the minimum average prediction error, was found to be the best architecture for predicting surface roughness. The network predicts surface roughness well for unseen data. Many different combinations of cutting parameters can produce a desired surface finish of the component; the optimum cutting parameters that obtain the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in MATLAB®.
Keywords: End milling, Surface roughness, Neural networks.
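A minimal sketch of such a 3-12-1 network in Python, using scikit-learn's MLPRegressor in place of the MATLAB ANN toolbox; the training values, scaling, activation and solver below are illustrative assumptions, not data from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical cuts: [spindle speed (rpm), feed (mm/rev), depth of cut (mm)]
X = np.array([[1000, 0.10, 0.5], [1500, 0.15, 1.0], [2000, 0.20, 1.5],
              [2500, 0.10, 1.0], [3000, 0.15, 0.5], [1200, 0.20, 1.2],
              [1800, 0.12, 0.8], [2200, 0.18, 1.4]])
y = np.array([1.8, 2.1, 2.6, 1.6, 1.2, 2.4, 1.9, 2.3])  # measured Ra (µm), assumed

scaler = StandardScaler()
# One hidden layer of 12 neurons gives the 3-12-1 structure described above.
net = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(scaler.fit_transform(X), y)

# Predict roughness for an unseen parameter combination.
print(net.predict(scaler.transform([[1600, 0.14, 0.9]])))
```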
196 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement of a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset from a different domain, limiting the accuracy of similarity measurement. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms state-of-the-art object similarity scoring techniques in evaluations for finding exact items. The proposed method achieves 86.5% accuracy, compared to 59.9% for the state-of-the-art technique; that is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrievals are likely to be similar products. Therefore, the proposed method can reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
Keywords: Visual search, deep learning, convolutional neural network, machine learning.
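A sketch of the similarity measurement described above: pool an intermediate-layer CNN feature map into a vector and compare two images by cosine distance. The pooling choices, feature-map shapes and the cosine form are assumptions for illustration, not the paper's trained metric:

```python
import numpy as np

def pool_features(feature_map, mode="avg"):
    # Pool an intermediate CNN feature map of shape (H, W, C) into a C-vector.
    return feature_map.mean(axis=(0, 1)) if mode == "avg" else feature_map.max(axis=(0, 1))

def similarity_distance(fa, fb):
    # Cosine distance between pooled features; training pairs drawn from the
    # same cluster would carry a target distance of zero.
    fa, fb = fa / np.linalg.norm(fa), fb / np.linalg.norm(fb)
    return 1.0 - float(fa @ fb)

# Stand-ins for intermediate-layer activations of two images (assumed shapes).
rng = np.random.default_rng(1)
map_a, map_b = rng.random((14, 14, 256)), rng.random((14, 14, 256))
print(similarity_distance(pool_features(map_a), pool_features(map_b)))
```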
195 Biomethanation of Palm Oil Mill Effluent (POME) by Membrane Anaerobic System (MAS) using POME as a Substrate
Authors: N.H. Abdurahman, Y. M. Rosli, N. H. Azhari, S. F. Tam
Abstract:
The direct discharge of palm oil mill effluent (POME) wastewater causes serious environmental pollution due to its high chemical oxygen demand (COD) and biochemical oxygen demand (BOD). Traditional methods of POME treatment have both economic and environmental disadvantages. In this study, a membrane anaerobic system (MAS) was used as an alternative, cost-effective method for treating POME. Six steady states were attained as part of a kinetic study that considered concentration ranges of 8,220 to 15,400 mg/l for mixed liquor suspended solids (MLSS) and 6,329 to 13,244 mg/l for mixed liquor volatile suspended solids (MLVSS). Kinetic equations from Monod, Contois and Chen & Hashimoto were employed to describe the kinetics of POME treatment at organic loading rates ranging from 2 to 13 kg COD/m³/d. Throughout the experiment, the COD removal efficiency was 94.8 to 96.5%, with hydraulic retention times (HRT) from 400.6 to 5.7 days. The growth yield coefficient Y was found to be 0.62 g VSS/g COD, the specific microorganism decay rate was 0.21 d⁻¹, and the methane gas yield production rate was between 0.25 l/g COD/d and 0.58 l/g COD/d. Steady state influent COD concentrations increased from 18,302 mg/l in the first steady state to 43,500 mg/l in the sixth steady state. The minimum solids retention time obtained from the three kinetic models ranged from 5 to 12.3 days. The k values were in the range of 0.35–0.519 g COD/g VSS·d, and the maximum specific growth rate (μmax) values were between 0.26 and 0.379 d⁻¹. The solids retention time (SRT) decreased from 800 days to 11.6 days. The complete treatment reduced the COD content to 2,279 mg/l, equivalent to a 94.8% reduction from the original.
Keywords: COD reduction, POME, kinetics, membrane, anaerobic, Monod, Contois equation.
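For reference, the first two kinetic models named above have the following standard textbook forms, with μ the specific growth rate, μmax its maximum, S the substrate (COD) concentration, X the biomass (VSS) concentration, and Ks and B saturation constants; the notation is generic, not taken from the paper:

```latex
\mu = \mu_{\max}\,\frac{S}{K_s + S} \quad \text{(Monod)}, \qquad
\mu = \mu_{\max}\,\frac{S}{B\,X + S} \quad \text{(Contois)}
```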
194 Bendability Analysis for Bending of C-Mn Steel Plates on Heavy Duty 3-Roller Bending Machine
Authors: Himanshu V. Gajjar, Anish H. Gandhi, Tanvir A Jafri, Harit K. Raval
Abstract:
Bendability is constrained by the maximum top roller load imparting capacity of the machine. The maximum load is encountered during the edge pre-bending stage of roller bending. The capacity of a 3-roller plate bending machine is specified by the maximum thickness and minimum shell diameter combinations that can be pre-bent for a given plate material of maximum width. The commercially available plate width, or the width of plate that can be accommodated on the machine, decides the maximum rolling width. Original equipment manufacturers (OEMs) provide the machine capacity chart based on a reference material, considering a perfectly plastic material model. The reported work presents the bendability analysis of a heavy duty 3-roller plate bending machine. The input variables for industry are plate thickness, shell diameter and material property parameters, as these are fixed by the design. Analytical models of equivalent thickness, equivalent width and maximum width based on a power law material model were derived to study bendability. The equation for maximum width provides the bendability for a designed configuration, i.e., the material property, shell diameter and thickness combination, within the machine limitations. Equivalent thicknesses based on the perfectly plastic and power law material models were compared for four different material grades of C-Mn steel in order to predict bendability. The effect of top roller offset on the bendability at maximum top roller load imparting capacity is reported.
Keywords: 3-Roller bending, Bendability, Equivalent thickness, Equivalent width, Maximum width.
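For reference, the two material models compared here differ only in the assumed stress-strain law; in generic notation (σ flow stress, σY yield stress, ε plastic strain, K strength coefficient, n strain-hardening exponent, none of it quoted from the paper):

```latex
\sigma = \sigma_Y \quad \text{(perfectly plastic)}, \qquad
\sigma = K\,\varepsilon^{\,n} \quad \text{(power law)}
```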
193 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema & Interface for Mapping & Communication
Authors: Devanjan Bhattacharya, Jitka Komarkova
Abstract:
The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses infrastructure already in place, such as mobile networks and a laptop/PC, plus a small software installation. The geospatial datasets are maps of a region, which are likewise inexpensive; hence there is little need to invest, and the system reaches everyone with a mobile phone. A novel architecture of hazard assessment and warning is introduced, in which major ICT technologies are interfaced to give a unique WebGIS-based, dynamically updatable, real-time geohazard warning communication system, integrating WebGIS with telecommunication technology in a way not done before. The work thus addresses hazard warning in a sustainable and user-friendly manner, and couples hazard zonation and hazard warning procedures into a single system. A generalized architecture for deciphering a range of geo-hazards has been developed. Hence the developmental work presented here can be summarized as: the development of an internet-SMS based automated geo-hazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies in the design and development of a warning system; the modularization of different technologies towards a warning communication system; and automated data creation, transformation and dissemination over different interfaces. The architecture of the developed warning system has been functionally automated and generalized enough that it can be used for any hazard, and the setup requirement has been kept to a minimum.
Keywords: Geospatial, web-based GIS, geohazard, warning system.
192 Probabilistic Damage Tolerance Methodology for Solid Fan Blades and Discs
Authors: Andrej Golowin, Viktor Denk, Axel Riepe
Abstract:
Solid fan blades and discs in aero engines are subjected to high combined low and high cycle fatigue loads, especially around the contact areas between blade and disc. Therefore, special coatings (e.g. dry film lubricant) and surface treatments (e.g. shot peening or laser shock peening) are applied to increase the strength with respect to combined cyclic fatigue and fretting fatigue, but also to improve damage tolerance capability. The traditional deterministic damage tolerance assessment based on fracture mechanics analysis, which treats service damage as an initial crack, often gives overly conservative results, especially in the presence of vibratory stresses. A probabilistic damage tolerance methodology using crack initiation data has been developed for fan discs exposed to relatively high vibratory stresses in cross- and tail-wind conditions at certain resonance speeds for limited time periods. This Monte Carlo based method uses a damage databank from similar designs, vibration levels measured during typical aircraft operations and wind conditions, and experimental crack initiation data derived from testing artificially damaged specimens with representative surface treatment under combined fatigue conditions. The proposed methodology leads to a more realistic prediction of the minimum damage tolerance life for the most critical locations, applicable to modern fan disc designs.
Keywords: Damage tolerance, Monte-Carlo method, fan blade and disc, laser shock peening.
191 The Design of a Vehicle Traffic Flow Prediction Model for a Gauteng Freeway Based on an Ensemble of Multi-Layer Perceptron
Authors: Tebogo Emma Makaba, Barnabas Ndlovu Gatsheni
Abstract:
The cities of Johannesburg and Pretoria, both located in the Gauteng province, are separated by a distance of 58 km. The traffic queues on the Ben Schoeman freeway, which connects these two cities, can stretch for almost 1.5 km. Vehicle traffic congestion impacts negatively on business and on commuters' quality of life. The goal of this paper is to identify variables that influence the flow of traffic and to design a vehicle traffic prediction model that predicts the traffic flow pattern in advance. The model will enable motorists to make appropriate travel decisions ahead of time. The data used were collected by Mikro's Traffic Monitoring (MTM). A multi-layer perceptron (MLP) was used individually to construct the model, and the MLP was also combined with the bagging ensemble method when training on the data. Cross-validation was used for evaluating the models. The results obtained from the techniques were compared using prediction costs, computed from a combination of the loss matrix and the confusion matrix. The designed prediction models show that the status of traffic flow on the freeway can be predicted using the following parameters: travel time, average speed, traffic volume and day of the month. The implication of this work is that commuters will be able to spend less time travelling on the route and more time with their families. The logistics industry will save more than twice what it is currently spending.
Keywords: Bagging ensemble methods, confusion matrix, multi-layer perceptron, vehicle traffic flow.
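A minimal sketch of the ensemble setup described above, using scikit-learn's BaggingClassifier wrapped around an MLP and scored by cross-validation; the synthetic records, congestion rule and network size are illustrative assumptions, not the MTM data or the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical records: [travel time (min), average speed (km/h),
# traffic volume (veh/h), day of month]; labels: 0 = free flow, 1 = congested.
rng = np.random.default_rng(0)
X = rng.random((200, 4)) * [60, 120, 4000, 31]
y = (X[:, 0] / 60 + X[:, 2] / 4000 > 1.0).astype(int)  # toy congestion rule

mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
bagged = BaggingClassifier(mlp, n_estimators=10, random_state=0)

# Cross-validation, as in the paper, to compare the single MLP with bagging.
for name, model in [("MLP", mlp), ("Bagged MLP", bagged)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```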
190 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason is that in such cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi fracture, replace the initial particle (the one intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for simulating materials that fracture completely instead of breaking locally. When simulating local failure, it is therefore advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the locations of the highest local loads, due to the failure of the bonds in those areas, with several sub-particle clusters being the result of the fracture, which can in turn also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM), enabling more realistic fracture behavior to be depicted, were evaluated based on the example of filter cake. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles applicable to industrial-sized simulations, is presented in this paper.
Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
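A toy sketch of the pre-building step described in this abstract: sub-particles are packed inside an assumed particle outline and every touching pair is bonded, so that later bond failures can produce local breakage. Grid spacing, radius, outline and bond tolerance are illustrative assumptions, not values from the project:

```python
import numpy as np
from itertools import combinations

# Pre-build a complex particle as bonded sub-particles on a cubic grid (sketch).
r = 0.5  # sub-particle radius; defines the minimum size of fracture fragments
pts = np.array([[x, y, z] for x in range(4) for y in range(4) for z in range(2)],
               dtype=float)

# Keep only grid points inside an assumed particle outline (here: a sphere).
center, radius = pts.mean(axis=0), 2.0
pts = pts[np.linalg.norm(pts - center, axis=1) <= radius]

# Bond every pair of touching sub-particles; in the DEM run, these bonds
# fail under local load, letting the conglomerate break locally.
bonds = [(i, j) for i, j in combinations(range(len(pts)), 2)
         if np.linalg.norm(pts[i] - pts[j]) <= 2 * r * 1.05]

print(len(pts), "sub-particles,", len(bonds), "bonds")
```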
189 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare the estimates with actual birth weights. The sample group comprised 126 infants delivered in Dan KhunThot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using the statistics of frequency, percentage, mean, and standard deviation; the difference was then analyzed by a paired t-test. The average actual birth weight was 3093.57 ± 391.03 g (mean ± SD), while the average fetal weight estimated by Johnson's method was 3,455 ± 454.55 g, exceeding the average actual birth weight by 384.09 g. When the infants were classified according to birth weight, the actual birth weight was less than the estimated fetal weight for the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups, but for the high birth weight group (>4000 g) the actual birth weight was more than the estimated fetal weight. The smallest differences between actual and estimated weight were found in the high (>4000 g), appropriate (2500-3999 g) and low (<2500 g) birth weight groups, respectively. The rate of fetal weight estimates falling within 10% of the actual birth weight was 35.7%. Comparison of the actual birth weights with the estimates showed a statistically significant difference (p < .000). Employing Johnson's method can provide an initial fetal weight estimate before proceeding to special examinations, which may be excessively costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.
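For reference, Johnson's formula as commonly stated in obstetric references computes the estimate from the fundal height and the station of the fetal head; this small sketch is illustrative, not code from the study:

```python
def johnson_fetal_weight(fundal_height_cm, head_engaged):
    """Johnson's formula (standard textbook form): EFW (g) = (FH - n) * 155,
    where FH is fundal height in cm and n = 11 if the fetal head is engaged
    (at or below the ischial spines), n = 12 if it is not engaged."""
    n = 11 if head_engaged else 12
    return (fundal_height_cm - n) * 155

# Example: fundal height 34 cm, head not engaged -> 3410 g.
print(johnson_fetal_weight(34, head_engaged=False))
```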
188 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control
Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon
Abstract:
A methodology is presented here for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power allowing the frequency drop to be kept inside a given threshold under a given contingency is identified and compared using DigSilent's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and the hysteresis BESS. It emerges that a 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
Keywords: Battery Energy Storage System, electrical network frequency stability, frequency control unit, PowerFactory.
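A minimal sketch of the two control types on a synthetic frequency-deviation trace; the droop constant, deadband, hysteresis width and trace are all assumed values, chosen only to show why the hysteresis control delivers a higher throughput:

```python
import numpy as np

def bess_power(df, p_max, deadband=0.02, hyst=0.01, droop_full=0.2):
    # Primary response of a BESS to a frequency-deviation trace df (Hz).
    # Basic control: respond only while |df| exceeds the deadband.
    # Hysteresis control: once triggered, stay active until |df| < deadband - hyst.
    basic = np.zeros_like(df)
    hysteretic = np.zeros_like(df)
    active = False
    for i, d in enumerate(df):
        p = -np.clip(d / droop_full, -1.0, 1.0) * p_max  # droop-like response
        if abs(d) > deadband:
            basic[i] = p
        active = abs(d) > deadband or (active and abs(d) > deadband - hyst)
        if active:
            hysteretic[i] = p
    return basic, hysteretic

df = 0.05 * np.sin(np.linspace(0.0, 20.0, 2000))  # synthetic deviation trace
basic, hysteretic = bess_power(df, p_max=10.0)
# Energy throughput comparison: hysteresis delivers more, as the paper observes.
print(np.abs(basic).sum(), np.abs(hysteretic).sum())
```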
187 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas
Authors: Sheetal Sharma
Abstract:
Runoff, decreasing water levels and reduced recharge in urban areas have become a complex issue nowadays, pointing to defective urban design and increasing population as causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land use planning deals with the transformation of land from natural areas into developed ones, which leads to changes in the natural environment. Detailed knowledge of the relationship between existing land use-land cover patterns and recharge, with respect to the prevailing soil below, has not kept pace with the speed of development. The incompatibilities between urban functions and the functions of the natural environment are becoming numerous. Changes in land patterns due to built-up areas, pavements, roads and similar land cover seriously affect surface water flow. They also change the permeability and absorption characteristics of the soil. Urban planners need to know natural processes along with the modern means and best technologies available, as there is a huge gap between basic knowledge of natural processes and what is required for balanced development planning with minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and AutoCAD Civil 3D. The variations were used in computer modeling with the Storm Water Management Model (SWMM) to find the runoff for various soil groups and the resulting recharge, observing water levels in POW data for the last 40 years in the study area. Results were analyzed again to find the best correlations for sustainable recharge in urban areas.
Keywords: Geology, runoff, urban planning, land use-land cover.
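One standard way to quantify how a land-cover change raises runoff is the SCS curve-number relation (the paper's SWMM modeling is more detailed); a small sketch with assumed curve numbers:

```python
def scs_runoff_depth(p_in, cn):
    """SCS curve-number runoff (US customary units): P and Q in inches;
    CN reflects the land use-land cover and hydrologic soil group."""
    s = 1000.0 / cn - 10.0   # potential maximum retention
    ia = 0.2 * s             # initial abstraction
    return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in + 0.8 * s)

# Same 3-inch storm over pervious cover (CN 65) vs. heavily built-up cover (CN 92).
print(scs_runoff_depth(3.0, 65), scs_runoff_depth(3.0, 92))
```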
186 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum
Authors: Dunwen Zuo, Yongfang Deng, Bo Song
Abstract:
An attempt is made here to measure the forces in three directions in the AA2024 aluminum FSJ (Friction Stir Joining) process, under different feed speeds, different tool tilt angles, and with or without a pin on the tool, using an octagonal ring dynamometer, and to investigate how the four main factors influence the forces in the FSJ process. It is found that a high feed speed leads to a small feed force and a small lateral force, but to a large feed force in the stable joining stage of the process. As the rotational speed increases, the time required for the axial force to drop from its maximum to its minimum increases in the push-up process. In the stable joining stage, the rotational speed has little effect on the feed force, while a large rotational speed leads to a small lateral force and a small axial force. The maximum axial force increases as the tilt angle of the tool increases in the downward movement stage. At the moment feeding starts, as the tilt angle of the tool increases, the amplitude of the axial force increase becomes larger. In the stable joining stage, with an increase in the tilt angle of the tool, the axial force increases, the lateral force decreases, and the feed force remains almost unchanged. The tool with a pin decreases the axial force in the downward movement stage. Compared with the tool without a pin, the tool with a pin increases the feed force and lateral force but reduces the axial force in the stable joining stage.
Keywords: FSJ, force factor, AA2024, friction stir joining.
185 Morphological and Electrical Characterization of Polyacrylonitrile Nanofibers Synthesized Using Electrospinning Method for Electrical Application
Authors: Divyanka Sontakke, Arpit Thakre, D. K Shinde, Sujata Parmeshwaran
Abstract:
Electrospinning is the most widely utilized method for creating nanofibers because of its simple setup, its capacity to mass-produce continuous nanofibers from various polymers, and its ability to produce ultrathin fibers with controllable diameters. Smooth and well-aligned ultrafine polyacrylonitrile (PAN) nanofibers with diameters ranging from the submicron to the nanometer scale were produced using the electrospinning technique. PAN powder was used as the precursor to prepare the solution utilized in this process. When the electrostatic repulsion overcame the surface tension, a charged jet of polymer solution was ejected from the tip of the spinneret, and ultrathin nonwoven fibers were thereby produced. The effects of electrospinning parameters such as applied voltage, feed rate, concentration of polymer solution and tip-to-collector distance on the morphology of the electrospun PAN nanofibers were investigated. The nanofibers were heat treated for carbonization to examine the changes in properties and composition relevant to electrical applications. Scanning Electron Microscopy (SEM) was performed before and after carbonization for morphological characterization, alongside the study of electrical conductivity. The SEM images show uniform fiber diameters and no bead formation. The average diameter of the PAN fibers was 365 nm for the flat plate collector and 280 nm for the rotating drum collector. The four-probe method was used to examine the electrical conductivity of the nanofibers; the conductivity improved significantly with an increase in the oxidation temperature to which they were exposed.
Keywords: Electrospinning, polyacrylonitrile carbon nanofibres, heat treatment, electrical conductivity.
184 Fake Account Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, their social life, and even their practical life, has become interrelated with these sites. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and these factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area; this comparison confirms the accuracy of the proposed study. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, the study can be applied to different social network sites such as Facebook with minor changes, according to the nature of the social network, as discussed in this paper.
Keywords: Fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques.
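A minimal sketch of the comparison step: several classifiers are scored by cross-validation on a small account-feature matrix, and the most accurate one would be selected. The features, labels and classifier choices below are illustrative assumptions, not the paper's minimum weighted feature set:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical profile features per account:
# [followers/friends ratio, account age (days), tweets/day, has default image]
X = np.array([[0.1, 30, 120, 1], [2.5, 900, 4, 0], [0.05, 10, 300, 1],
              [1.8, 1500, 2, 0], [0.2, 45, 80, 1], [3.0, 2000, 5, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = fake, 0 = genuine (toy labels)

for clf in (DecisionTreeClassifier(), GaussianNB(),
            KNeighborsClassifier(n_neighbors=3)):
    scores = cross_val_score(clf, X, y, cv=3)
    print(type(clf).__name__, scores.mean())
```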
183 Thermal Treatments and Characteristics Study On Unalloyed Structural (AISI 1140) Steel
Authors: S. S. Sharma, P. R. Prabhu, Rajagopal Chadaga
Abstract:
The main emphasis of metallurgists has been on processing materials to obtain the balanced mechanical properties required for a given application. One of the processing routes used to alter the properties is heat treatment. Nearly 90% of structural applications are related to medium carbon and alloyed steels, which are hence regarded as structural steels. The major requirement for conventional steel is improved workability, toughness, hardness and grain refinement. In this view, it is proposed to study the mechanical and tribological properties of unalloyed structural (AISI 1140) steel given different thermal (heat) treatments, namely annealing, normalizing, tempering and hardening, compared with the as-received (cold worked) specimen. All heat treatments were carried out under atmospheric conditions. The hardening treatment improves the hardness of the material, while a marginal decrease in hardness with improved ductility is observed after tempering. Annealing and normalizing improve the ductility of the specimen, with the normalized specimen showing the highest ductility. The hardened specimen shows the highest wear resistance in the initial period of sliding wear, whereas above 25 km of sliding distance, the as-received steel outperforms the hardened specimen. Both mild and severe wear regions are observed. Microstructural analysis shows a pearlitic structure in the normalized specimen, a lath martensitic structure in the hardened specimen, and a pearlitic-ferritic structure in the annealed specimen.
Keywords: Annealing, hardness, heat treatment, normalizing, wear.
182 Markov Chain Based QoS Support for Wireless Body Area Network Communication in Health Monitoring Services
Authors: R. A. Isabel, E. Baburaj
Abstract:
Wireless Body Area Networks (WBANs) are essential for real-time health monitoring of patients and for diagnosing many diseases. WBANs comprise many sensors that monitor a large range of ambient conditions. Quality of Service (QoS) is a key challenge in WBANs, because the different state information of the neighboring nodes has to be monitored in an accurate manner. However, energy consumption increases when predicting and maintaining exact information in highly dynamic environments. In order to reduce energy consumption and end-to-end delay, a Markov Chain based Quality of Service Support (MC-QoSS) method is designed for the health monitoring services of WBAN communication. Energy consumption is reduced by forming a Markov chain with the high-energy nodes in the sensor network communication path; the low-energy sensor nodes are removed using transition probabilities in order to reduce end-to-end delay. High-energy nodes are formed into the chain structure of the corresponding path to enhance communication. After choosing the communication path through high-energy nodes, packets are sent from the source node to the sink node with a higher Packet Delivery Ratio. The simulation results show that the MC-QoSS method improves the packet delivery ratio and reduces energy consumption with minimum end-to-end delay, compared to existing methods.
Keywords: Wireless body area networks, quality of service, Markov chain, health monitoring services.
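A toy sketch of the chain-forming idea: low-energy nodes are dropped, and a transition matrix over the remaining high-energy nodes favors energy-rich relays. The energies, threshold and transition rule are assumed for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical residual energies (J) of sensor nodes on a candidate relay path.
energy = np.array([0.9, 0.2, 0.7, 0.8, 0.1, 0.6])
threshold = 0.5

# Keep only high-energy nodes in the chain (low-energy nodes are bypassed).
chain = np.where(energy >= threshold)[0]

# Transition matrix over the chain: favor moving toward higher-energy nodes.
w = energy[chain]
P = np.tile(w / w.sum(), (len(chain), 1))    # each row is a distribution
assert np.allclose(P.sum(axis=1), 1.0)

print("relay chain:", chain)                 # e.g. nodes [0, 2, 3, 5]
print("transition probabilities:", P[0])
```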
181 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System
Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain
Abstract:
Emission of carbon dioxide (CO2) has adversely affected the environment, and one of the major sources of CO2 emission is transportation. In the last few decades, the increased mobility of people using vehicles has enormously increased CO2 emissions into the environment. To reduce CO2 emission, a sustainable transportation system is required, in which smart parking is one of the important measures that needs to be established. To contribute to reducing CO2 emission, this research proposes a smart parking system: a cloud-based solution that automatically searches for and recommends to drivers the most preferred parking slots. To determine the preference of parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. In the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used in sorting, multi-level filtering, exploratory data analysis (EDA), the Analytical Hierarchy Process (AHP) and the weighted sum model (WSM) to rank the parking areas and recommend to drivers the top-k most preferred ones. In the EDA process, "2020testcar-2020-03-03", a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology supersedes the conventional one in reducing the emission of CO2 into the atmosphere.
Keywords: CO2 emission, IoT, EDA, Weighted Sum Model, WSM, regression, smart parking system.
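A minimal sketch of the WSM ranking step; the criteria, the normalized decision matrix and the AHP-style weights are assumed values for illustration, not the system's actual feature set:

```python
import numpy as np

# Hypothetical decision matrix: rows = parking areas, columns = normalized
# cost-type criteria (distance, expected search time, price, estimated CO2),
# so lower scores are better.
areas = ["P1", "P2", "P3"]
X = np.array([[0.2, 0.5, 0.3, 0.1],
              [0.7, 0.2, 0.6, 0.4],
              [0.4, 0.8, 0.1, 0.2]])

# AHP-style weights (assumed; must sum to 1).
w = np.array([0.35, 0.25, 0.15, 0.25])

scores = X @ w                       # weighted sum per parking area
ranking = np.argsort(scores)         # ascending: lowest cost first
print([areas[i] for i in ranking])   # top-k recommendation order
```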
180 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records
Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar
Abstract:
Most reinforced concrete structures designed only for heavy loads have large transverse reinforcement spacings and therefore suffer severe failure under intense ground motion. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran using Incremental Dynamic Analysis (IDA) under near-field and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was performed under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the transverse reinforcement spacing can increase the maximum inter-story drift ratio by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicate that under seismic excitation involving directivity or fling-step, the failure probability values and the rates of increase in failure probability are much smaller than the corresponding values for far-fault earthquakes; however, under near-fault records, the probability of exceedance occurs at lower seismic intensities than under far-fault records.
Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.
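A sketch of how IDA results are typically condensed into a fragility curve, fitting a lognormal model by the method of moments; the capacity values below are assumed, not the paper's data:

```python
import numpy as np
from scipy import stats

# IDA results (assumed): intensity measure (Sa, in g) at which each record
# first drives the frame past the damage-state drift limit.
im_capacity = np.array([0.31, 0.45, 0.38, 0.52, 0.41, 0.36, 0.48])

# Lognormal fragility: P(failure | IM = x) = Phi((ln x - ln theta) / beta).
theta = np.exp(np.mean(np.log(im_capacity)))   # median capacity
beta = np.std(np.log(im_capacity), ddof=1)     # lognormal dispersion

x = np.linspace(0.1, 1.0, 10)
p_fail = stats.norm.cdf(np.log(x / theta) / beta)
print(np.round(p_fail, 3))
```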
179 Conservation Agriculture Practice in Bangladesh: Farmers’ Socioeconomic Status and Soil Environment Perspective
Authors: Mohammad T. Uddin, Aurup R. Dhar
Abstract:
The study was conducted to assess the impact of conservation agriculture practice on farmers' socioeconomic conditions and soil environmental quality in Bangladesh. A total of 450 farmers (50 focal, 150 proximal and 250 control) from five districts were selected for this study. Descriptive statistics such as sums, averages and percentages were calculated to evaluate the socioeconomic data. Using Enyedi's crop productivity index, it was found that the crop productivity of focal, proximal and control farmers increased by 0.9, 1.2 and 1.3 percent, respectively. The result of difference-in-differences (DID) analysis indicated that the impact of conservation agriculture practice on farmers' average annual income was significant. The multidimensional poverty index (MPI) indicates that poverty, in terms of deprivation of health, education and living standards, decreased, and a remarkable improvement in farmers' socioeconomic status was found after adopting conservation agriculture practice. Most of the focal and proximal farmers reported improved soil environmental conditions, whereas the majority of control farmers reported unchanged conditions. The Probit model reveals that minimum tillage operation, permanent organic soil cover, and application of compost and vermicompost were significant factors affecting soil environmental quality under conservation agriculture. Input support, motivation, training programmes and extension services are recommended in order to raise the awareness and enrich the knowledge of farmers on conservation agriculture practice.
Keywords: Conservation agriculture, crop productivity, socioeconomic status, soil environment quality.
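For reference, the DID estimator contrasts the before-after change in average income for adopters with that of control farmers; in generic notation (not the paper's):

```latex
\widehat{\mathrm{DID}} =
\left(\bar{Y}^{\,\mathrm{adopter}}_{\mathrm{after}} - \bar{Y}^{\,\mathrm{adopter}}_{\mathrm{before}}\right)
- \left(\bar{Y}^{\,\mathrm{control}}_{\mathrm{after}} - \bar{Y}^{\,\mathrm{control}}_{\mathrm{before}}\right)
```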
178 Influence of Combined Drill Coulters on Seedbed Compaction under Conservation Tillage Technologies
Authors: E. Šarauskis, L. Masilionyte, Z. Kriaučiūniene, K. Romaneckas
Abstract:
All over the world, including the Central and East European countries, sustainable tillage and sowing technologies are being applied increasingly broadly with a view to optimising soil resources, mitigating soil degradation processes, saving energy resources, preserving biological diversity, etc. As a result, altered conditions for the tillage and sowing technological processes are inevitably faced. The purpose of this study is to determine the seedbed topsoil hardness produced when using a combined sowing coulter in different sustainable tillage technologies. The research involved a combined coulter consisting of two dissected blade discs and a shoe coulter. In order to determine soil hardness in the seedbed area, a multipenetrometer was used. Experimental studies found that in loosened soil, the combined sowing coulter compacts the furrow bottom, walls and soil near the furrow equally; therefore, soil hardness was similar at all researched depths and no significant differences were established. In loosened and compacted (double-rolled) soil, the impact of the combined coulter on the hardness of the seedbed soil surface was more considerable at a depth of 2 mm. The soil hardness of the furrow bottom and walls to a distance of up to 26 mm was 1.1 MPa. At a depth of 10 mm, the greatest hardness was established at the furrow bottom. In loosened and heavily compacted (rolled six times) soil, at depths of 2 and 10 mm the combined coulter compacted the furrow bottom most of all, to a hardness of 1.8 MPa. At a depth of 20 mm, soil hardness within the whole investigated area varied insignificantly, fluctuating around 2.0 MPa. The hardness of the furrow walls and the soil near the furrow was approximately 1.0 MPa lower than that at the furrow bottom.
Keywords: Coulters design, seedbed, soil hardness, combined coulters, soil compaction.
177 The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion
Authors: Saeed Khorasanizadeh
Abstract:
The adhesion strength of the exterior or interior coating of steel pipes is very important. Increasing coating adhesion can increase the coating's lifetime and the safety factor of a transmission line pipe while decreasing the corrosion rate and costs. Steel pipe surfaces are prepared before coating by shot and grit blasting, a mechanical method. Parameters that affect this process include the abrasive particle size, the distance to the surface, the abrasive flow rate, the physical properties and shapes of the abrasive, the selection of abrasive, the kind of machine and its power, the specified degree of surface cleanliness, the roughness, the blasting time and the ambient humidity. This research aimed to find conditions that improve surface preparation, adhesion strength and corrosion resistance of the coating. This paper therefore studies the effects of varying the abrasive flow rate, the abrasive particle size and the blasting time on steel surface roughness, including the effect of over-blasting, using a centrifugal blasting machine. A number of steel samples (according to API 5L X52) were prepared and coated with epoxy powder, and the coating adhesion strengths were compared using the pull-off test. The results show that increasing the abrasive particle size and flow rate increases the steel surface roughness and the coating adhesion strength, but increasing the blasting time can over-blast the surface, increasing its temperature and hardness and, in turn, decreasing the steel surface roughness and the coating adhesion strength.
Keywords: surface preparation, abrasive particles, adhesion strength.
176 Tom Stoppard: The Amorality of the Artist
Authors: Majeed Mohammed Midhin, Clare Finburgh
Abstract:
Maintaining a healthy, balanced loyalty, whether to art or to society, poses a debatable issue. The artist is always on the lookout for the potential tension between those two realms. Therefore, one of the most painful dilemmas the artist faces is how to function in a society without sacrificing the aesthetic values of his or her work. In other words, the life-long awareness of failure that derives from the conception of the artist as caught between unflattering social realities and the need to invent genuine art forms becomes fertile ground for artists to work. Within the framework of this dilemma, the questions of the responsibility of the artist and of the relationship of art to politics become illuminating. To a large extent, in drama, this dilemma is represented through the fictional characters of the play. The present paper tackles the idea of the amorality of the artist in selected plays by Tom Stoppard. Stoppard's awareness of his situation as a refugee led him to keep his distance from politics, and he tried hard to avoid intervening in the realm of political debate, especially in his earliest work. This is not to say that he took no interest in politics as such; rather, he preferred to question it than to adopt a fixed ideological position. His reluctance to intervene in politics is also ascribed to his feeling of gratitude to Britain, where he settled. As a result, Stoppard has frequently been criticized for a lack of political engagement, and for not leaning far enough to the left when he does engage. His reaction to these public criticisms finds expression in his self-conscious statements, which defensively stress the artifice of his work. Like Oscar Wilde, he thinks that the responsibility of the artist is devoted to the realm of his or her art. Consequently, his consciousness of the role of the artist is truly reflected in his two plays, Artist Descending a Staircase (1972) and Travesties (1974).
Keywords: Amorality, responsibility, politics, ideology.
175 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. Although a number of open-source software tools and artificial intelligence (AI) methods designed for analyzing mitochondrial images exist, the scarcity of practitioners combining expertise in the medical field and in AI poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using the fluorescence mitochondrial dataset. Ground truth labels generated using Labkit were also used to evaluate the performance of the detection and segmentation model in terms of precision, recall and Rand index. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
Keywords: 2D, Binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation.
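A minimal sketch of the three-stage pipeline named above, in the same Python/OpenCV stack the paper uses; the CLAHE, blur, kernel and area-filter parameters are assumed values, not those of the study:

```python
import cv2
import numpy as np

def segment_mitochondria(path):
    """Sketch of the three-stage pipeline: pre-processing (CLAHE),
    binarization (Otsu), and coarse-to-fine refinement via morphology
    plus a descriptive-statistics size filter (assumed parameters)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # 1. Pre-processing: contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.GaussianBlur(clahe.apply(img), (5, 5), 0)
    # 2. Binarization: Otsu's global threshold.
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 3. Coarse-to-fine: morphological opening, then drop tiny blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(opened)
    for c in contours:
        if cv2.contourArea(c) > 20:  # assumed minimum mitochondrion area (px)
            cv2.drawContours(mask, [c], -1, 255, -1)
    return mask
```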
174 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling
Authors: János Levendovszky, András Oláh
Abstract:
In this paper, novel statistical sampling based equalization techniques and CNN based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods that combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since calculating the BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and performance superior to the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds, and a simple mechanism is derived to identify the dominant samples in real time for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper has also developed a Cellular Neural Network (CNN) based approach to detection, in which fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of this method has also been analyzed by simulations.
Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.
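A toy sketch of the core idea: estimate BER by sampling and adapt the equalizer along the estimated gradient. Plain Monte Carlo with finite differences is used here in place of the paper's stratified sampling and Li-Silvester bounds; the channel, step sizes and sample counts are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])  # assumed multipath channel impulse response

def ber_estimate(w, n=2000, snr_db=10.0):
    # Estimate BER(w) by sampling: random BPSK symbols through the channel,
    # equalized with w, then count sign errors.
    bits = rng.choice([-1.0, 1.0], size=n)
    rx = np.convolve(bits, h, mode="same")
    rx = rx + rng.normal(0.0, 10 ** (-snr_db / 20.0), size=n)
    return float(np.mean(np.sign(np.convolve(rx, w, mode="same")) != bits))

# Adapt the equalizer along a finite-difference gradient of the sampled BER.
w = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = 1e-2
        grad[i] = (ber_estimate(w + e) - ber_estimate(w - e)) / 2e-2
    w -= 0.05 * grad

print("adapted weights:", w, "BER:", ber_estimate(w))
```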
173 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding
Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf
Abstract:
In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that introduces a perceptual weighting of the wavelet coefficients prior to controlling the SPIHT encoding algorithm, in order to reach a targeted bit rate with a perceptual quality improvement for a given bit rate and a fixation point which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS), which play an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception; thus, it weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.
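A small sketch of CSF-based subband weighting with PyWavelets, using the biorthogonal 9/7 filter (bior4.4) mentioned in the keywords; the per-level weights are placeholders, not the paper's CSF values, and the SPIHT encoder itself is not shown:

```python
import numpy as np
import pywt

def csf_weight_subbands(image, levels=3, wavelet="bior4.4"):
    # Decompose with the biorthogonal 9/7 filter (bior4.4 in PyWavelets).
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # Placeholder CSF weights per decomposition level (coarse -> fine);
    # the HVS is most sensitive to mid frequencies.
    csf = {1: 0.7, 2: 1.0, 3: 0.6}
    weighted = [coeffs[0]]  # approximation band left unweighted
    for lev, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        w = csf.get(lev, 1.0)
        weighted.append((cH * w, cV * w, cD * w))
    return weighted  # these weighted coefficients would feed the SPIHT encoder

img = np.random.rand(128, 128)  # stand-in image
print(len(csf_weight_subbands(img)))
```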
172 Fault Classification of Double Circuit Transmission Line Using Artificial Neural Network
Authors: Anamika Jain, A. S. Thoke, R. N. Patel
Abstract:
This paper addresses the problems encountered by conventional distance relays when protecting double-circuit transmission lines. The problems arise principally as a result of the mutual coupling between the two circuits under different fault conditions; this mutual coupling is highly nonlinear in nature. An adaptive protection scheme is proposed for such lines based on the application of an artificial neural network (ANN). An ANN has the ability to classify nonlinear relationships between measured signals by identifying the different patterns of the associated signals. One of the key points of the present work is that only current signals measured at the local end have been used to detect and classify faults in a double-circuit transmission line with double-end infeed. The adaptive protection scheme was tested for a specific fault type with varying fault location, fault resistance and fault inception angle, and with remote-end infeed. Improved performance is obtained once the neural network is trained adequately; it then performs precisely when faced with different system parameters and conditions. The entire set of test results clearly shows that faults are detected and classified within a quarter cycle; thus the proposed adaptive protection technique is well suited to double-circuit transmission line fault detection and classification. The results of the performance studies show that the proposed neural network based module can improve the performance of conventional fault selection algorithms.
Keywords: Double circuit transmission line, Fault detection and classification, High impedance fault and Artificial Neural Network.
171 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways
Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi
Abstract:
The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM for different freeways. Site-specific PCE values were developed using the concept of spatial lagging headway (that is, the distance between the rear bumpers of two consecutive vehicles in a traffic stream), measured from field traffic data. The study used data from four locations on a single urban freeway and from three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted the lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks on basic urban freeways (level terrain) were 1.35 and 1.60, respectively; for rural freeways, the estimated values were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. The study results reveal that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
Keywords: Level of Service, Capacity Analysis, Lagging Headway.
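In headway terms, the PCE of a truck class reduces to a ratio of mean spatial lagging headways, with the headways themselves predicted by the 3SLS models; a simplified generic form (assumed, not quoted from the paper), where h̄_T is the mean lagging headway of the truck class and h̄_PC that of passenger cars:

```latex
E_{T} \;=\; \frac{\bar{h}_{T}}{\bar{h}_{PC}}
```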
170 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil
Authors: Moses, G, Osinubi, K. J.
Abstract:
A laboratory study of the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil, compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP) efforts, was undertaken. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for the SP, WAS and MP compactive efforts were 286, 401 and 515 kN/m², respectively, while peak values of 1019, 1328 and 1420 kN/m², recorded at the 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m² conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA stabilized soil increased with higher energy levels from 2, 4 and 10% for the natural soil; peak values of 55, 18 and 8% were recorded at the 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Treatment with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable given the harsh test condition of the 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.
Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.