Search results for: digital surface model
20072 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach
Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani
Abstract:
This study focuses on the development of prediction models for the ozone concentration time series. The prediction model is built based on a chaotic approach. First, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting. A traditional autoregressive linear prediction model is also built. Moreover, an improvement to the local linear approximation method is performed. The prediction models are applied to the hourly ozone time series observed at the benchmark station in Malaysia. Comparison of all models through the calculation of mean absolute error, root mean squared error, and correlation coefficient shows that the one with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for the ozone concentration time series.
Keywords: chaotic approach, phase space, Cao method, local linear approximation method
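The local linear approximation step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, embedding dimension, delay, and neighbour count are assumptions. The idea is to build delay vectors, find the nearest neighbours of the current phase-space point, and fit a local linear map to predict the next value.

```python
import numpy as np

def delay_embed(x, dim, tau):
    # Build delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]
    n = len(x) - (dim - 1) * tau
    return np.array([x[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_linear_predict(x, dim=4, tau=1, k=12):
    # Predict the next value of the series from the k nearest
    # neighbours of the last delay vector in phase space
    emb = delay_embed(np.asarray(x, float), dim, tau)
    query = emb[-1]
    hist = emb[:-1]
    # Each history vector's "next value" is the sample right after it
    targets = np.asarray(x, float)[(dim - 1) * tau + 1:]
    dist = np.linalg.norm(hist - query, axis=1)
    idx = np.argsort(dist)[:k]
    # Fit a local affine map  y = A . v + b  by least squares
    A = np.hstack([hist[idx], np.ones((k, 1))])
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return float(np.append(query, 1.0) @ coef)
```

On a smooth signal the local linear map recovers the next sample almost exactly; for a real ozone series the embedding parameters would be chosen via the phase space plot and the Cao method mentioned above.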
Procedia PDF Downloads 336
20071 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico – Colombia in Google Earth Engine with Landsat and Sentinel 2 Images
Authors: Francisco Reyes, Hector Ramirez
Abstract:
Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of changes in coastlines and coastal erosion is identified as the central problem. Coastlines were determined from 1984 to 2020 on the Google Earth Engine platform from Landsat and Sentinel images, using the Modified Normalized Difference Water Index (MNDWI) and the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) indicates an erosion of 238.32 hectares and an accretion of 181.96 hectares, while the 20-year prediction (year 2041) indicates an erosion of 544.04 hectares and an accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it is expected to register the highest erosion. The land cover that presented the greatest change was artificialized territories.
Keywords: coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia
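The water index underlying the shoreline extraction above is a simple band ratio. A minimal sketch follows; the band values and the zero threshold for water are illustrative assumptions, not the study's calibration:

```python
import numpy as np

def mndwi(green, swir):
    # MNDWI = (Green - SWIR) / (Green + SWIR)
    green = np.asarray(green, float)
    swir = np.asarray(swir, float)
    return (green - swir) / (green + swir)

def water_mask(green, swir, threshold=0.0):
    # Pixels with MNDWI above the threshold are typically flagged as water;
    # the shoreline is then traced along the water/land boundary
    return mndwi(green, swir) > threshold
```

In Google Earth Engine the same computation would be expressed per image with a normalized-difference operation over the green and shortwave-infrared bands.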
Procedia PDF Downloads 126
20070 Influence of Long-Term Variability in Atmospheric Parameters on Ocean State over the Head Bay of Bengal
Authors: Anindita Patra, Prasad K. Bhaskaran
Abstract:
The atmosphere-ocean is a dynamically linked system that influences the exchange of energy, mass, and gas at the air-sea interface. The exchange of energy takes place in the form of sensible heat, latent heat, and momentum, commonly referred to as fluxes along the atmosphere-ocean boundary. Large-scale features such as the El Niño-Southern Oscillation (ENSO) are a classic example of the interaction mechanism that occurs along the air-sea interface and governs the inter-annual variability of the Earth's climate system. Most importantly, the ocean and atmosphere as a coupled system act in tandem, thereby maintaining the energy balance of the climate system, a manifestation of the coupled air-sea interaction process. The present work is an attempt to understand the long-term variability in atmospheric parameters (from the surface to upper levels) and investigate their role in influencing the surface ocean variables. More specifically, the influence of atmospheric circulation and its variability on the mean Sea Level Pressure (SLP) has been explored. The study reports a critical examination of both ocean and atmosphere parameters during the monsoon season over the head Bay of Bengal region. A trend analysis has been carried out for several atmospheric parameters, such as air temperature, geopotential height, and omega (vertical velocity), for different vertical levels in the atmosphere (from the surface to the troposphere), covering the period from 1992 to 2012. The Reanalysis 2 dataset from the National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) was used in this study. The study signifies that the variability in air temperature and omega corroborates the variation noticed in geopotential height. Further, the study advocates that for the lower atmosphere the geopotential heights depict a typical east-west contrast, exhibiting a zonal dipole behavior over the study domain.
In addition, the study clearly brings to light that the variations over different levels in the atmosphere play a pivotal role in supporting the observed dipole pattern, as clearly evidenced by the trends in SLP, associated surface wind speed, and significant wave height over the study domain.
Keywords: air temperature, geopotential height, head Bay of Bengal, long-term variability, NCEP reanalysis 2, omega, wind-waves
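A trend analysis of the kind described above amounts to fitting a least-squares linear trend per parameter and vertical level. A minimal sketch, with illustrative variable names and synthetic numbers rather than the NCEP-DOE data:

```python
import numpy as np

def linear_trend(years, values):
    # Least-squares fit values ~ slope*year + intercept;
    # slope is the change in the parameter per year
    slope, intercept = np.polyfit(years, values, 1)
    return slope, intercept
```

In practice this would be applied independently to each grid point and pressure level of the reanalysis fields over 1992-2012.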
Procedia PDF Downloads 229
20069 Assessment of Chemical and Physical Properties of Surface Water Resources in Flood Affected Area
Authors: Siti Hajar Ya’acob, Nor Sayzwani Sukri, Farah Khaliz Kedri, Rozidaini Mohd Ghazi, Nik Raihan Nik Yusoff, Aweng A/L Eh Rak
Abstract:
The flood event that occurred in mid-December 2014 on the East Coast of Peninsular Malaysia drew attention from the public nationwide. Apart from the loss of and damage to properties and belongings, the massive flood event introduced environmental disturbances to surface water resources in the flood-affected area. A study was conducted to measure the physical and chemical composition of the Galas River and Pergau River in order to identify the flood impact on environmental deterioration in the surrounding area. Collected samples were analyzed in-situ using a YSI portable instrument and also in the laboratory, with acid digestion and heavy metals analysis by Atomic Absorption Spectroscopy (AAS). Results showed that the ranges of temperature (°C), DO (mg/L), EC (µS/cm), TDS (mg/L), turbidity (NTU), pH, and salinity were 25.05-26.65, 1.51-5.85, 0.032-0.054, 0.022-0.035, 23.2-76.4, 3.46-7.31, and 0.01-0.02, respectively. The results from this study could be used as a primary database to evaluate the water quality status of the respective rivers after the massive flood.
Keywords: flood, river, heavy metals, AAS
Procedia PDF Downloads 385
20068 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge with most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherent low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems. Moreover, it is well known that visual information about speech through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)).
In the proposed system, there are three subsystems. The first is the lip localization system, which localizes the lips in the digital inputs. The next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. And the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class numbers in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech of the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. It can also be useful for normal-hearing persons in noisy environments or conditions, where they can find out what was said by other people without hearing their voice.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
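The 2D-DCT feature extraction step named above can be sketched from first principles. This is an illustrative stand-in, not the authors' pipeline: the orthonormal DCT-II is built explicitly, and a low-frequency corner of the coefficient matrix is kept as the feature vector (the block size and number of retained coefficients are assumptions).

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0] /= np.sqrt(2.0)
    return c

def dct2_features(img, keep=4):
    # 2-D DCT of a lip-region image, keeping the low-frequency
    # keep x keep corner as a compact feature vector
    img = np.asarray(img, float)
    cm = dct_matrix(img.shape[0])
    cn = dct_matrix(img.shape[1])
    coeffs = cm @ img @ cn.T
    return coeffs[:keep, :keep].ravel()
```

In a full system these DCT features would then be projected with LDA and classified with an SVM, as the abstract describes.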
Procedia PDF Downloads 286
20067 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu, Tigray
Abstract:
Sesame is an important oilseed crop in Ethiopia, the second most exported agricultural commodity next to coffee. However, soil fertility management is poor, and there is no research-led farming system for the crop. The AquaCrop model was applied as a decision-support tool; it performs a semi-quantitative approach to simulate the yield of crops under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. The experiment used four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha nitrogen) and three improved varieties (Setit-1, Setit-2, and Humera-1). In the meantime, growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%.
The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool for soil fertility management in sesame production.
Keywords: aquacrop model, sesame, normalized water productivity, nitrogen fertilizer
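The goodness-of-fit statistics used in this calibration (R2, RMSE, N-RMSE, model efficiency E, and degree of agreement D) follow standard definitions and can be computed in a few lines. A sketch, with the usual Nash-Sutcliffe form for E and the Willmott form for D; the function name is an assumption:

```python
import numpy as np

def fit_stats(obs, sim):
    # Standard calibration statistics for observed vs simulated series
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = 100.0 * rmse / obs.mean()                    # normalized RMSE, %
    # Model efficiency (Nash-Sutcliffe): 1 = perfect, <= 0 = no skill
    e = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    # Degree of agreement (Willmott's d): bounded in (0, 1]
    d = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"R2": r2, "RMSE": rmse, "NRMSE": nrmse, "E": e, "D": d}
```

A perfect simulation gives RMSE 0 and R2, E, and D all equal to 1, which is the benchmark against which the reported ranges (e.g., E of 0.78 to 0.94) are read.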
Procedia PDF Downloads 77
20066 Microwave-Assisted 3D Porous Graphene for Its Multi-Functionalities
Authors: Jung-Hwan Oh, Rajesh Kumar, Il-Kwon Oh
Abstract:
Porous graphene has extensive potential applications in a variety of fields such as hydrogen storage, CO oxidation, gas separation, supercapacitors, fuel cells, nanoelectronics, oil adsorption, and so on. However, the generation of carbon atom vacancies for precise small holes has not been extensively studied as a way to prevent the agglomeration of graphene sheets and to obtain porous graphene with high surface area. Recently, many research efforts have been presented to develop physical and chemical synthetic approaches for porous graphene. But the physical method has a very high cost of manufacture, and the chemical method consumes many hours to produce porous graphene. Herein, we propose a porous graphene containing holes with atomic-scale precision, made by embedding metal nanoparticles through microwave irradiation, for hydrogen storage and CO oxidation multi-functionalities. This proposed synthetic method is appropriate for fast and convenient production of three-dimensional nanostructures, which have nanoholes on the graphene surface as a consequence of microwave irradiation. The metal nanoparticles are dispersed quickly on the graphene surface and generate uniform nanoholes on the graphene nanosheets. The morphological and structural characterization of the porous graphene was examined by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and Raman spectroscopy, respectively. The metal nanoparticle-embedded porous graphene exhibits a microporous volume of 2.586 cm3 g-1 with an average pore radius of 0.75 nm. HR-TEM analysis was carried out to further characterize the microstructures. By investigating the Raman spectra, we can understand the structural changes of the graphene. The results of this work demonstrate a possibility to produce a new class of porous graphene.
Furthermore, the newly acquired knowledge of diffusion into graphene can provide useful guidance for the development and growth of nanostructures.
Keywords: CO oxidation, hydrogen storage, nanocomposites, porous graphene
Procedia PDF Downloads 376
20065 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets to reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. Variation of temperature, volume fraction of brine, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller at portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends to exhibit instabilities because of the sharp variation of temperature at the start of the process; changing the intervals improves the unstable situation. The analytical model using a numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and the numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
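The Method of Lines setup described above (insulated on one side, suddenly imposed constant temperature on the other) can be sketched for plain heat conduction. This sketch omits the brine pockets, latent heat, and salinity physics of the paper and uses a constant diffusivity and illustrative parameters; it only shows the spatial discretization into ODEs plus an explicit time step.

```python
import numpy as np

def heat_mol(T0, alpha, dx, dt, steps, T_bound):
    # Method of Lines: central differences in space turn the PDE
    # dT/dt = alpha * d2T/dx2 into ODEs per node; explicit Euler in time.
    # Node 0 is insulated (zero flux); the last node is held at T_bound.
    T = np.array(T0, float)
    T[-1] = T_bound
    for _ in range(steps):
        dT = np.zeros_like(T)
        dT[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
        dT[0] = alpha * 2 * (T[1] - T[0]) / dx ** 2   # ghost-node insulated end
        T = T + dt * dT
        T[-1] = T_bound                               # re-pin Dirichlet end
    return T
```

For stability the explicit step must satisfy dt <= dx^2 / (2*alpha), which is the "time steps and space intervals chosen properly" condition the abstract alludes to.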
Procedia PDF Downloads 220
20064 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis
Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior
Abstract:
Mathematical models of drying are used for the purpose of understanding the drying process in order to determine important parameters for the design and operation of the dryer. The jackfruit is a fruit with high consumption in the Northeast and high perishability. It is necessary to apply techniques to improve its conservation for longer in order to distribute it to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy (Af) and bias (Bf) factors, root mean square error (RMSE), and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50 °C for 9 hours. It was observed that the Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, use of the Midilli model is not appropriate for process control purposes due to the need for four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by Page's model.
Keywords: drying, models, jackfruit, biotechnology
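The Page model (moisture ratio MR = exp(-k t^n)) can be fitted by linearisation, and the accuracy/bias factors are simple log-ratio statistics. A sketch, with synthetic moisture-ratio data rather than the jackfruit measurements; function names are assumptions:

```python
import numpy as np

def fit_page(t, mr):
    # Linearise MR = exp(-k t^n):  ln(-ln MR) = ln k + n ln t,
    # then fit a straight line by least squares
    t = np.asarray(t, float)
    mr = np.asarray(mr, float)
    y = np.log(-np.log(mr))
    n, ln_k = np.polyfit(np.log(t), y, 1)
    return np.exp(ln_k), n

def page_mr(t, k, n):
    # Page model prediction of the moisture ratio
    return np.exp(-k * np.asarray(t, float) ** n)

def accuracy_bias_factors(pred, obs):
    # Af >= 1 measures average spread; Bf measures systematic over/under
    # prediction; both equal 1.0 for perfect agreement
    q = np.log10(np.asarray(pred, float) / np.asarray(obs, float))
    return 10 ** np.mean(np.abs(q)), 10 ** np.mean(q)
```

The Lewis model is the special case n = 1, and Midilli adds two further parameters, which is why the abstract prefers Page for control purposes.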
Procedia PDF Downloads 383
20063 Business Logic and Environmental Policy, a Research Agenda for the Business-to-Citizen Business Model
Authors: Mats Nilsson
Abstract:
The European electricity markets have been changing from regulated markets to, in some places, deregulated markets, and are now experiencing a strong influence from renewable support systems. Firms that rely on subsidies have a different business logic than firms acting in a market context. The article proposes that an offspring of the regular business models, the business-to-citizen model, should be used. The case of the European electricity market frames the concept of a business-to-citizen business model, and a research agenda for this concept is outlined.
Keywords: business logic, business model, subsidies, business-to-citizen
Procedia PDF Downloads 467
20062 Estimation of Exhaust and Non-Exhaust Particulate Matter Emissions’ Share from On-Road Vehicles in Addis Ababa City
Authors: Solomon Neway Jida, Jean-Francois Hetet, Pascal Chesse
Abstract:
Vehicular emission is the key source of air pollution in the urban environment. This includes both fine particles (PM2.5) and coarse particulate matter (PM10). However, particulate matter emissions from road traffic comprise emissions from the exhaust tailpipe and emissions due to wear of vehicle parts such as the brakes, tires, and clutch, plus re-suspension of dust (non-exhaust emissions). This study estimates the share of the two sources of pollutant particle emissions from on-road vehicles in the Addis Ababa municipality, Ethiopia. To calculate the shares, two methods were applied: exhaust tailpipe emissions were calculated using the European emission inventory Tier II method, and the Tier I method was used for non-exhaust emissions (vehicle tire wear, brake wear, and road surface wear). The results show that of the total traffic-related particulate emissions in the city, 63% are emitted from vehicle exhaust and the remaining 37% from non-exhaust sources. Annual road transport exhaust emissions account for around 2,394 tons of particles from all vehicle categories. Of the total yearly non-exhaust particulate matter emissions, tire and brake wear contribute around 65% and road surface wear 35%. Furthermore, vehicle tire and brake wear were responsible for 584.8 tons of coarse particle (PM10) and 314.4 tons of fine particle (PM2.5) emissions annually in the city, whereas surface wear emissions were responsible for around 313.7 tons of PM10 and 169.9 tons of PM2.5 pollutant emissions. This suggests that non-exhaust sources might be as significant as exhaust sources and make a considerable contribution to the impact on air quality.
Keywords: Addis Ababa, automotive emission, emission estimation, particulate matters
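Tier-style inventory estimates of this kind reduce to activity times emission factor, summed over vehicle categories. A minimal sketch of the arithmetic; the fleet sizes, mileages, and emission factors below are hypothetical placeholders, not the study's inputs:

```python
def tier_emissions_tons(fleet):
    # fleet: iterable of (n_vehicles, km_per_vehicle_per_year, ef_g_per_km)
    # Total annual emissions = sum(activity * emission factor),
    # converted from grams to tons (1 t = 1e6 g)
    return sum(n * km * ef for n, km, ef in fleet) / 1e6
```

The exhaust (Tier II) and non-exhaust (Tier I) totals are each computed this way with their own category-specific emission factors, and the 63%/37% split follows from the two totals.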
Procedia PDF Downloads 133
20061 Improved Small-Signal Characteristics of Infrared 850 nm Top-Emitting Vertical-Cavity Lasers
Authors: Ahmad Al-Omari, Osama Khreis, Ahmad M. K. Dagamseh, Abdullah Ababneh, Kevin Lear
Abstract:
High-speed infrared vertical-cavity surface-emitting laser diodes (VCSELs) with Cu-plated heat sinks were fabricated and tested. VCSELs with 10 µm aperture diameter and 4 µm of electroplated copper demonstrated a -3dB modulation bandwidth (f-3dB) of 14 GHz and a resonance frequency (fR) of 9.5 GHz at a bias current density (Jbias) of only 4.3 kA/cm2, which corresponds to an improved f-3dB2/Jbias ratio of 44 GHz2/kA/cm2. At higher and lower bias current densities, the f-3dB2/Jbias ratio decreased to about 30 GHz2/kA/cm2 and 18 GHz2/kA/cm2, respectively. Examination of the analogue modulation response demonstrated that the presented VCSELs displayed a steady f-3dB/fR ratio of 1.41±10% over the whole range of the bias current (1.3Ith to 6.2Ith). The devices also demonstrated a maximum modulation bandwidth (f-3dB max) of more than 16 GHz at a bias current 25% below the industrial bias current standard for reliability.
Keywords: current density, high-speed VCSELs, modulation bandwidth, small-signal characteristics, thermal impedance, vertical-cavity surface-emitting lasers
Procedia PDF Downloads 576
20060 Multi-Objective Optimization of Assembly Manufacturing Factory Setups
Authors: Andreas Lind, Aitor Iriondo Pascual, Dan Hogberg, Lars Hanson
Abstract:
Factory setup lifecycles are most often described and prepared in CAD environments; the preparation is based on experience and inputs from several cross-disciplinary processes. Early in the factory setup preparation, a so-called block layout is created. The intention is to describe a high-level view of the intended factory setup and to claim area reservations and allocations. Factory areas are then blocked, i.e., targeted to be used for specific intended resources and processes, later redefined with detailed factory setup layouts. Each detailed layout is based on the block layout and inputs from cross-disciplinary preparation processes, such as manufacturing sequence, productivity, workers’ workplace requirements, and resource setup preparation. However, this activity is often not carried out with all variables considered simultaneously, which might entail a risk of sub-optimizing the detailed layout based on manual decisions. Therefore, this work aims to realize a digital method for assembly manufacturing layout planning where productivity, area utilization, and ergonomics can be considered simultaneously in a cross-disciplinary manner. The purpose of the digital method is to support engineers in finding optimized designs of detailed layouts for assembly manufacturing factories, thereby facilitating better decisions regarding setups of future factories. Input datasets are company-specific descriptions of required dimensions for specific area reservations, such as defined dimensions of a worker’s workplace, material façades, aisles, and the sequence to realize the product assembly manufacturing process. To test and iteratively develop the digital method, a demonstrator has been developed with an adaptation of existing software that simulates and proposes optimized designs of detailed layouts. 
Since the method is to consider productivity, ergonomics, area utilization, and constraints from the automatically generated block layout, a multi-objective optimization approach is utilized. In the demonstrator, the input data are sent to the simulation software Industrial Path Solutions (IPS). Based on the input and Lua scripts, the IPS software generates a block layout in compliance with the company's defined dimensions of area reservations. Communication is then established between IPS and the software EPP (Ergonomics in Productivity Platform), including intended resource descriptions, the assembly manufacturing process, and manikin (digital human) resources. Using multi-objective optimization approaches, the EPP software then calculates layout proposals that are iteratively sent, simulated, and rendered in IPS, following the rules and regulations defined in the block layout as well as productivity and ergonomics constraints and objectives. The software demonstrator is promising. The software can handle several parameters to optimize the detailed layout simultaneously and can put forward several proposals. It can optimize multiple parameters or weight the parameters to fine-tune the optimal result of the detailed layout. The intention of the demonstrator is to make the preparation between cross-disciplinary silos transparent and achieve a common preparation of the assembly manufacturing factory setup, thereby facilitating better decisions.
Keywords: factory setup, multi-objective, optimization, simulation
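The "several proposals" behaviour of such a multi-objective optimizer can be illustrated with a plain Pareto filter over candidate layouts, each scored on objectives to minimise (e.g., cycle time, ergonomic load, floor area). This is a generic sketch, not the IPS/EPP algorithm; the scores are hypothetical:

```python
def pareto_front(points):
    # Keep the candidates not dominated by any other candidate,
    # where all objectives are minimised. q dominates p if q is no
    # worse in every objective and strictly better in at least one.
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Weighting the objectives, as the abstract mentions, would instead collapse each tuple to a single scalar score and pick one layout; the Pareto filter keeps the full set of trade-off proposals for the engineer to choose from.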
Procedia PDF Downloads 156
20059 Effect of Process Parameters on Tensile Strength of Aluminum Alloy ADC 10 Produced through Ceramic Shell Investment Casting
Authors: Balwinder Singh
Abstract:
Castings were produced using aluminum alloy ADC 10 through the ceramic shell investment casting process. Experiments were conducted as per the Taguchi L9 orthogonal array. In order to evaluate the effect of process parameters such as mould preheat temperature, preheat time, firing temperature, and pouring temperature on the tensile strength of ceramic shell investment castings, the Taguchi parameter design and optimization approach was used. Plots of means of significant factors and S/N ratios have been used to determine the best relationship between the responses and the model parameters. It is found that pouring temperature is the most significant factor. The best tensile strength of aluminum alloy ADC 10 is given by a 150 ºC shell preheat temperature, 45 minutes preheat time, 900 ºC firing temperature, and 650 ºC pouring temperature.
Keywords: investment casting, shell preheat temperature, firing temperature, Taguchi method
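Taguchi analysis ranks factor levels by signal-to-noise ratio; for a larger-the-better response such as tensile strength the standard formula is S/N = -10 log10(mean(1/y^2)). A sketch with hypothetical replicate strengths, not the experiment's measurements:

```python
import numpy as np

def sn_larger_the_better(y):
    # Taguchi S/N ratio (dB) for a larger-the-better response:
    # S/N = -10 * log10( mean(1 / y^2) )
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))
```

In an L9 study this S/N value is computed per run, averaged per factor level, and the level with the highest mean S/N is selected, which is how the 150/45/900/650 combination above would be identified.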
Procedia PDF Downloads 176
20058 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce
Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.
Abstract:
One of the most significant challenges people face in today's digital era is the alarming increase in fraudulent activities on online platforms. The appeal of online shopping, which avoids long queues in shopping malls, offers a wide variety of products, and delivers goods to the home, has paved the way for a rapid increase in vast online shopping platforms. This has had a major impact on increasing fraudulent activities as well. This loop of online shopping and transactions has made it easy for fraudulent users to commit fraud. For instance, consider a store that orders thousands of products all at once, where the massive number of items purchased and their transactions turn out to be fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need to introduce machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, Decision Trees, and Random Forest, which are highly effective in generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary motive and main focus is to provide a comprehensive solution to this problem that empowers e-commerce administrators in timely fraud detection and prevention. In addition, sentiment analysis is harnessed in the model so that the e-commerce admin can attend to customers' concerns, feedback, and comments, allowing the admin to improve the user experience. The ultimate objective of this study is to harden online shopping platforms against fraud and ensure a safer shopping experience. This paper reports a model accuracy of 84%.
All the findings and observations noted during this work lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies
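Of the three classifiers the abstract names, KNN is simple enough to sketch from scratch. The toy transaction features below (order amount, item count) and the labels are hypothetical illustrations, not the paper's dataset; in practice scikit-learn's KNN, decision tree, and random forest implementations would be used:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    # Classify each query point by majority vote among its k nearest
    # training points (Euclidean distance on the feature vectors)
    preds = []
    for q in np.asarray(X_query, float):
        dist = np.linalg.norm(X_train - q, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)
```

For imbalanced fraud data (one of the listed keywords), the vote would typically be combined with resampling or class weighting rather than used raw.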
Procedia PDF Downloads 35
20057 Riesz Mixture Model for Brain Tumor Detection
Authors: Mouna Zitouni, Mariem Tounsi
Abstract:
This research introduces an application of the Riesz mixture model for medical image segmentation for accurate diagnosis and treatment of brain tumors. We propose a pixel classification technique based on the Riesz distribution, derived from an extended Bartlett decomposition. To our knowledge, this is the first study addressing this approach. The Expectation-Maximization algorithm is implemented for parameter estimation. A comparative analysis, using both synthetic and real brain images, demonstrates the superiority of the Riesz model over a recent method based on the Wishart distribution.
Keywords: EM algorithm, segmentation, Riesz probability distribution, Wishart probability distribution
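The E/M alternation used for parameter estimation above can be illustrated with a two-component Gaussian mixture as a stand-in, since the Riesz density itself is beyond a short sketch; the structure (E-step responsibilities, M-step weighted updates) is the same, only the component density differs. All names and data below are illustrative:

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    # EM for a two-component 1-D Gaussian mixture (illustrative stand-in
    # for the Riesz mixture: same E/M alternation, different density)
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])          # spread-out initial means
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        pdf = (np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
               / np.sqrt(2 * np.pi * var))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates of mixing weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

In the segmentation setting, each pixel's intensity (or Bartlett-decomposed feature) plays the role of x, and the fitted responsibilities give the tumor/non-tumor pixel classification.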
Procedia PDF Downloads 25
20056 Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones
Authors: Li Ji, Kaiwei Chu, Shibo Kuang, Aibing Yu
Abstract:
Hydrocyclones are widely used to classify particles by size in industries such as mineral processing and chemical processing. The particles to be handled usually have a broad range of size distributions and sometimes density distributions, which has to be properly considered, causing challenges in the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and Discrete Element Method (DEM) offers convenience in modelling particle size/density distributions. However, its direct application to hydrocyclones is computationally prohibitive because there are billions of particles involved. In this work, a CFD-DEM model with the concept of the coarse-grained (CG) model is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton's laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the local-averaged Navier-Stokes equations, facilitated with the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle, and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
Keywords: computational fluid dynamics, discrete element method, hydrocyclone, multiphase flow
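The coarse-graining idea above (one CG particle standing in for an assembly of identical real particles) can be sketched numerically. A common convention, assumed here rather than taken from the paper, is that a CG particle of diameter ratio*d represents ratio**3 real particles so that the parcel's mass is conserved:

```python
import math

def coarse_grain(d_real, rho, ratio):
    # One CG particle of diameter ratio*d_real stands in for
    # ratio**3 real particles of density rho; mass is conserved
    d_cg = ratio * d_real
    m_real = rho * math.pi * d_real ** 3 / 6.0   # sphere mass
    n_represented = ratio ** 3
    m_cg = n_represented * m_real
    return d_cg, m_cg, n_represented
```

With a coarse-graining ratio of, say, 10, each DEM particle represents a thousand real ones, which is what makes billion-particle hydrocyclone simulations tractable.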
Procedia PDF Downloads 411
20055 Application of Ground-Penetrating Radar in Environmental Hazards
Authors: Kambiz Teimour Najad
Abstract:
The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data is then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location, and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated, and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines, or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to the collapse of the surface. 
Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location, and nature of subsurface features and structures. This information is critical in identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.
Keywords: GPR, hazard, landslide, rock fall, contamination
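The travel-time interpretation described above reduces, in the simplest case, to converting a two-way travel time into a reflector depth using the wave velocity in the medium. A minimal sketch, assuming a uniform medium characterized by its relative permittivity (the function name and example values are illustrative):

```python
C = 0.2998  # speed of light in free space, m/ns

def reflector_depth(twt_ns, eps_r):
    """Depth (m) of a reflector from the two-way travel time (ns) of the
    echo and the relative permittivity of the medium."""
    v = C / eps_r ** 0.5        # electromagnetic wave velocity in the medium, m/ns
    return v * twt_ns / 2.0     # divide by 2: the wave travels down and back
```

For example, in dry sand (eps_r around 4, so v around 0.15 m/ns), a 40 ns echo corresponds to a reflector roughly 3 m deep.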
Procedia PDF Downloads 88
20054 BER Estimate of WCDMA Systems with MATLAB Simulation Model
Authors: Suyeb Ahmed Khan, Mahmood Mian
Abstract:
Simulation plays an important role during all phases of the design and engineering of communication systems, from the early stages of conceptual design through the various stages of implementation, testing, and fielding of the system. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. The model describes multiuser effects and the calculation of BER (Bit Error Rate) in 3G mobile systems using Simulink in MATLAB 7.1. A Gaussian approximation is used to model the multi-user interference effect on system performance. BER has been analyzed by comparing the transmitted and received data.
Keywords: WCDMA, simulations, BER, MATLAB
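The paper's model is built in Simulink, but the standard Gaussian approximation it refers to has a closed form that can be sketched directly: multi-user interference is treated as additional Gaussian noise, giving an effective SNR and hence a Q-function BER. The sketch below assumes random spreading sequences and BPSK-style detection, which may differ from the paper's exact configuration.

```python
from math import erfc, sqrt

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_cdma(k_users, n_chips, eb_n0_db):
    """Standard Gaussian approximation of DS-CDMA BER: (k-1) interferers with
    random codes of length n_chips add variance (k-1)/(3 n_chips) to the
    thermal-noise term, and BER = Q(sqrt(effective SNR))."""
    eb_n0 = 10 ** (eb_n0_db / 10.0)
    sinr = 1.0 / ((k_users - 1) / (3.0 * n_chips) + 1.0 / (2.0 * eb_n0))
    return q(sqrt(sinr))
```

With a single user the expression collapses to the familiar BPSK result Q(sqrt(2 Eb/N0)), and the BER rises monotonically as users are added, which is the qualitative multiuser effect the simulation model captures.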
Procedia PDF Downloads 594
20053 Electronic Properties Study of Ni/MgO Nanoparticles by X-Ray Photoemission Spectroscopy (XPS)
Authors: Ouafek Nora, Keghouche Nassira, Dehdouh Heider, Untidt Carlos
Abstract:
A lot of knowledge has been accumulated on metal clusters supported on oxide surfaces because of their multiple applications in microelectronics, heterogeneous catalysis, and magnetic devices. In this work, the surface state of Ni/MgO has been studied by XPS (X-ray Photoemission Spectroscopy). The samples were prepared by ion-exchange impregnation of Ni²⁺/MgO, followed by either thermal treatment in air (T = 100–350 °C) or gamma irradiation (dose 100 kGy, dose rate 25 kGy h⁻¹). The samples are named NMI after impregnation, NMR after irradiation, and NMC(T) after calcination at temperature T (T = 100–600 °C). A structural study by XRD and HRTEM reveals the presence of nanoscaled Ni-Mg intermetallic phases (Mg₂Ni, MgNi₂, and Mg₆Ni) and magnesium hydroxide, Mg(OH)₂, in the nanometric range (2–4 nm). Mg-Ni compounds are of great interest in energy fields (hydrogen storage, among others). XPS spectra show two Ni 2p peaks at energies of about 856.1 and 861.9 eV, indicating that the nickel is primarily in an oxidized state at the surface. The shift of the main peak relative to pure NiO (856.1 instead of 854.0 eV) suggests that, in addition to oxygen, nickel is engaged in another bond with magnesium. This agrees with the O 1s spectra, which present an overlap of peaks corresponding to NiO and MgO at calcination temperatures T ≤ 300 °C.
Keywords: XPS, XRD, nanoparticles, Ni-MgO
Procedia PDF Downloads 213
20052 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway: it receives most of its input from retinal ganglion cells (RGC) and sends it to the visual cortex. Low-threshold calcium currents (IT) at the membrane are a unique indicator for characterizing the firing functionality that LGN neurons gain from RGC input. According to LGN functional requirements, such as the functional mapping of RGC to LGN, the morphologies of the LGN neurons were developed. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore its functionality. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials at the LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters, and orientation. A wide range of stimulation parameters (stimulus amplitude, duration, and frequency) represents the various strengths of the electrical stimulation, combined with different morphological parameters (soma size, dendrite size, and structure). The orientation (0–180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which extracellular deep brain stimulation toward the LGN neuron is performed. A reduced dendrite structure was used in the model, derived with the Bush–Sejnowski algorithm to decrease computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed 100 µm from the electrode.
From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses
Procedia PDF Downloads 281
20051 The Influence of Absorptive Capacity on Process Innovation: An Exploratory Study in Seven Leading and Emerging Countries
Authors: Raphael M. Rettig, Tessa C. Flatten
Abstract:
This empirical study answers calls for research on Absorptive Capacity and process innovation. Due to the fourth industrial revolution, manufacturing companies face the biggest disruption of their production processes since the rise of advanced manufacturing technologies in the last century. Process innovation will therefore become a critical task for many manufacturing firms around the world to master. The general ability of organizations to acquire, assimilate, transform, and exploit external knowledge, known as Absorptive Capacity, has been shown to positively influence product innovation and is already conceptually associated with process innovation. The presented research provides empirical evidence for this influence. The findings are based on an empirical analysis of 732 companies from seven leading and emerging countries: Brazil, China, France, Germany, India, Japan, and the United States of America. The survey answers were collected in February and March 2018 and addressed senior- and top-level management, with a focus on operations departments. The statistical analysis reveals a positive influence of Potential and Realized Absorptive Capacity on successful process innovation, taking the implementation of new digital manufacturing processes as an example. Potential Absorptive Capacity, covering the acquisition and assimilation capabilities of an organization, showed a significant positive influence (β = .304, p < .05) on digital manufacturing implementation success and therefore on process innovation. Realized Absorptive Capacity proved to have a significant positive influence on process innovation as well (β = .461, p < .01). The presented study builds on prior conceptual work in the field of Absorptive Capacity and process innovation and contributes theoretically to ongoing research in two dimensions.
First, the conceptually proposed influence of Absorptive Capacity on process innovation is backed by empirical evidence in a broad international context. Second, whereas prior empirical research measured Absorptive Capacity with a focus on new product development and was therefore tailored to the research and development departments of organizations, the results of this study highlight the importance of Absorptive Capacity as a capability in the mechanical engineering and operations departments of organizations. The findings give managers an indication of the importance of implementing new innovative processes into their production systems and of fostering the right mindset in employees to identify new external knowledge. Through the ability to transform and exploit external knowledge, a firm's own production processes can be innovated successfully, with a positive influence on firm performance and the competitive position of the organization.
Keywords: absorptive capacity, digital manufacturing, dynamic capabilities, process innovation
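The standardized regression coefficients reported above (β values) come from regressing a z-scored outcome on z-scored predictors. A minimal sketch of that computation on synthetic data; the data, function name, and two-predictor setup are illustrative, not the study's actual survey model.

```python
import numpy as np

def std_betas(x, y):
    """Standardized OLS coefficients (betas): z-score the outcome y and each
    predictor column of x, then fit ordinary least squares."""
    xz = (x - x.mean(axis=0)) / x.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(xz, yz, rcond=None)
    return beta
```

On survey data the columns of x would be the Potential and Realized Absorptive Capacity scale scores and y the implementation-success measure; significance tests (the reported p-values) would additionally require standard errors, which this sketch omits.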
Procedia PDF Downloads 147
20050 Growth of Non-Polar a-Plane AlGaN Epilayer with High Crystalline Quality and Smooth Surface Morphology
Authors: Abbas Nasir, Xiong Zhang, Sohail Ahmad, Yiping Cui
Abstract:
Non-polar a-plane AlGaN epilayers of high structural quality have been grown on an r-plane sapphire substrate by metalorganic chemical vapor deposition (MOCVD). A graded non-polar AlGaN buffer layer with variable aluminium concentration was used to improve the structural quality of the non-polar a-plane AlGaN epilayer. Characterisation was carried out by high-resolution X-ray diffraction (HR-XRD), atomic force microscopy (AFM), and Hall effect measurements. The XRD and AFM results demonstrate that the Al-composition-graded non-polar AlGaN buffer layer significantly improved the crystalline quality and the surface morphology of the top layer. A low root-mean-square roughness of 1.52 nm is obtained from AFM, and a relatively low background carrier concentration down to 3.9× cm⁻³ is obtained from the Hall effect measurements.
Keywords: non-polar AlGaN epilayer, Al-composition-graded AlGaN layer, root mean square, background carrier concentration
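The root-mean-square roughness quoted from AFM is simply the rms deviation of the height map from its mean plane. A minimal sketch of that computation (the function name is illustrative; real AFM software would also remove tilt, which is skipped here):

```python
import numpy as np

def rms_roughness(height_map):
    """Root-mean-square roughness Rq of an AFM height map (2-D array of
    heights, e.g. in nm), measured about the mean plane."""
    h = np.asarray(height_map, float)
    h = h - h.mean()                 # deviations from the mean height
    return np.sqrt(np.mean(h ** 2))
```

A perfectly flat map gives Rq = 0, and a map alternating between 0 and 2 nm gives Rq = 1 nm, which puts the reported 1.52 nm value in context.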
Procedia PDF Downloads 146
20049 Preparation of Chemically Activated Carbon from Waste Tire Char for Lead Ions Adsorption and Optimization Using Response Surface Methodology
Authors: Lucky Malise, Hilary Rutto, Tumisang Seodigeng
Abstract:
Tires are very important to the automobile industry. However, there is a serious environmental problem concerning the disposal of rubber tires once they become worn out. The main aim of this study was to prepare activated carbon from waste tire pyrolysis char by impregnating KOH onto the pyrolytic char. Adsorption of lead onto the chemically activated carbon was studied using response surface methodology. The effects of process parameters such as temperature (°C), adsorbent dosage (g/1000 ml), pH, contact time (minutes), and initial lead concentration (mg/l) on the adsorption capacity were investigated. It was found that the adsorption capacity increases with an increase in contact time, pH, and temperature, and decreases with an increase in lead concentration. Optimization of the process variables was done using a numerical optimization method. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), thermogravimetric analysis (TGA), and scanning electron microscopy were used to characterize the pyrolytic carbon char before and after activation. The optimum points of 1 g/100 ml for adsorbent dosage, 7 for the pH of the solution, 115.2 min for contact time, 100 mg/l for initial metal concentration, and 25 °C for temperature were obtained, achieving the highest adsorption capacity of 93.176 mg/g with a desirability of 0.994. FTIR and TGA analyses show the presence of oxygen-containing functional groups on the surface of the activated carbon produced and that the weight loss taking place during the activation step is small.
Keywords: waste tire pyrolysis char, chemical activation, central composite design (CCD), adsorption capacity, numerical optimization
Procedia PDF Downloads 229
20048 Screening of Minimal Salt Media for Biosurfactant Production by Bacillus spp.
Authors: Y. M. Al-Wahaibi, S. N. Al-Bahry, A. E. Elshafie, A. S. Al-Bemani, S. J. Joshi, A. K. Al-Bahri
Abstract:
Crude oil is a major source of global energy. However, its widespread use and growing demand have resulted in increasing environmental pollution. One associated pollution problem is oil spills. Oil spills can be remediated with chemical dispersants, microbial biodegradation, and microbial metabolites such as biosurfactants. Four different minimal salt media for biosurfactant production by Bacillus spp. isolated from oil-contaminated sites in Oman were screened. These minimal salt media were supplemented with either glucose or sucrose as a carbon source. Among the isolates, W16 and B30 produced the most active biosurfactants. Isolate W16 produced the best biosurfactant, reducing surface tension (ST) and interfacial tension (IFT) to 25.26 mN/m and 2.29 mN/m, respectively, within 48 h, which are characteristics suitable for the removal of oil from contaminated sites. Biosurfactant was produced in bulk and extracted using the acid precipitation method. Thin layer chromatography (TLC) of the acid-precipitated biosurfactant revealed two concentrated bands. Further studies of the W16 biosurfactant for bioremediation of oil spills are recommended.
Keywords: oil contamination, remediation, Bacillus spp, biosurfactant, surface tension, interfacial tension
Procedia PDF Downloads 280
20047 Deconstructing Local Area Networks Using MaatPeace
Authors: Gerald Todd
Abstract:
Recent advances in random epistemologies and ubiquitous theory have paved the way for web services. Given the current status of linear-time communication, cyberinformaticians compellingly desire the exploration of link-level acknowledgements. In order to realize this purpose, we concentrate our efforts on disconfirming that DHTs and model checking are mostly incompatible.
Keywords: LAN, cyberinformatics, model checking, communication
Procedia PDF Downloads 403
20046 Effects of Upstream Wall Roughness on Separated Turbulent Flow over a Forward Facing Step in an Open Channel
Authors: S. M. Rifat, André L. Marchildon, Mark F. Tachie
Abstract:
The effect of upstream surface roughness on separated turbulent flow over a smooth forward facing step in an open channel was investigated using a particle image velocimetry technique. Three different upstream surface topographies, consisting of a hydraulically smooth wall, 36-grit sandpaper, and sand grains, were examined. Apart from the wall roughness condition, all other upstream flow characteristics were kept constant. Upstream roughness decreased the approach velocity by 2% and 10% but increased the turbulence intensity by 14% and 35% at the wall-normal distance corresponding to the top plane of the step, compared to the smooth upstream condition. The results showed that roughness decreased the reattachment lengths by 14% and 30% compared to the smooth upstream condition. Although the magnitudes of the maximum positive and negative Reynolds shear stress in the separated and reattached region were 0.02Ue for all the cases, the physical size of both the maximum and minimum contour levels decreased with increasing upstream roughness.
Keywords: forward facing step, open channel, separated and reattached turbulent flows, wall roughness
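The turbulence statistics quoted above (turbulence intensity, Reynolds shear stress) are obtained from an ensemble of instantaneous PIV velocity fields via the Reynolds decomposition: subtract the ensemble mean, then average products of the fluctuations. A minimal sketch, with array shapes and the function name as assumptions:

```python
import numpy as np

def turb_stats(u, v):
    """Per-point turbulence statistics from an ensemble of PIV fields.

    u, v : arrays of shape (n_snapshots, ny, nx), streamwise and wall-normal
    velocity components. Returns the mean streamwise velocity, the rms of the
    streamwise fluctuation (turbulence intensity, unnormalized), and the
    Reynolds shear stress -<u'v'>.
    """
    u_mean = u.mean(axis=0)
    v_mean = v.mean(axis=0)
    up = u - u_mean                         # Reynolds decomposition: u = U + u'
    vp = v - v_mean
    ti_u = np.sqrt((up ** 2).mean(axis=0))  # rms of u'
    rss = -(up * vp).mean(axis=0)           # Reynolds shear stress -<u'v'>
    return u_mean, ti_u, rss
```

In practice these fields would be normalized by the approach velocity Ue (or Ue squared for the shear stress) before plotting the contour levels the abstract compares.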
Procedia PDF Downloads 356
20045 Synthesis, Characterization and Coating of the Zinc Oxide Nanoparticles on Cotton Fabric by Mechanical Thermo-Fixation Techniques to Impart Antimicrobial Activity
Authors: Imana Shahrin Tania, Mohammad Ali
Abstract:
The present study reports the synthesis, characterization, and application of nano-sized zinc oxide (ZnO) particles on a cotton fabric surface. The aim of the investigation is to impart antimicrobial activity to textile cloth. The nanoparticles are synthesized by a wet chemical method from zinc sulphate and sodium hydroxide. SEM (scanning electron micrograph) images are taken to demonstrate the surface morphology of the nanoparticles, and XRD analysis is done to determine the crystal size. With confirmation of nanoparticle formation, the cotton woven fabric is treated with ZnO nanoparticles by the mechanical thermo-fixation (pad-dry-cure) technique. To increase the wash durability of the nano-treated fabric, an acrylic binder is used as a fixing agent. The treated fabric shows up to 90% bacterial reduction for S. aureus (Staphylococcus aureus) and 87% for E. coli (Escherichia coli), which is appreciable for bacteria-protective clothing.
Keywords: nanoparticle, zinc oxide, cotton fabric, antibacterial activity, binder
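Bacterial reduction percentages of the kind reported here are conventionally computed from colony counts on untreated and treated fabric. A minimal sketch of that formula, assuming an AATCC 100-style assay (the function name is illustrative):

```python
def bacterial_reduction(cfu_control, cfu_treated):
    """Percent reduction R = 100 (A - B) / A, where A and B are the viable
    colony counts (CFU) recovered from untreated and treated fabric."""
    return 100.0 * (cfu_control - cfu_treated) / cfu_control
```

So a drop from 1000 CFU on the untreated control to 100 CFU on the ZnO-treated fabric corresponds to the 90% reduction reported for S. aureus.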
Procedia PDF Downloads 137
20044 Surface Functionalization Strategies for the Design of Thermoplastic Microfluidic Devices for New Analytical Diagnostics
Authors: Camille Perréard, Yoann Ladner, Fanny D'Orlyé, Stéphanie Descroix, Vélan Taniga, Anne Varenne, Cédric Guyon, Michael. Tatoulian, Frédéric Kanoufi, Cyrine Slim, Sophie Griveau, Fethi Bedioui
Abstract:
The development of micro total analysis systems is of major interest for contaminant and biomarker analysis. As a lab-on-chip integrates all steps of an analysis procedure in a single device, analysis can be performed in an automated format with reduced time and cost, while maintaining performances comparable to those of conventional chromatographic systems. Moreover, these miniaturized systems are compatible with either field work or glovebox manipulation. This work is aimed at developing an analytical microsystem for trace and ultra-trace quantitation in complex matrices. The strategy consists in integrating a sample pretreatment step within the lab-on-chip through a confinement zone where selective ligands are immobilized for target extraction and preconcentration. Aptamers were chosen as selective ligands because of their high affinity for all types of targets (from small ions to viruses and cells) and their ease of synthesis and functionalization. This integrated target extraction and preconcentration step will be followed in the microdevice by an electrokinetic separation step and on-line detection. Polymers consisting of cyclic olefin copolymer (COC) or fluoropolymer (Dyneon THV) were selected as they are easy to mold, transparent in the UV-visible range, and highly resistant to solvents and extreme pH conditions. However, because of their low chemical reactivity, surface treatments are necessary. For the design of this miniaturized diagnostic device, we aimed at modifying the microfluidic system at two scales: (1) on the entire surface of the microsystem, to control the surface hydrophobicity (so as to avoid any sample adsorption on the walls) and the fluid flows during electrokinetic separation, or (2) locally, so as to immobilize selective ligands (aptamers) on restricted areas for target extraction and preconcentration.
We developed several novel strategies for the surface functionalization of COC and Dyneon, based on plasma, chemical, and/or electrochemical approaches. In a first approach, plasma-induced immobilization of brominated derivatives was performed on the entire surface. Further substitution of the bromine by an azide functional group led to covalent immobilization of ligands through a 'click' chemistry reaction between azides and terminal alkynes. The COC and Dyneon materials were characterized at each step of the surface functionalization procedure by various complementary techniques (contact angle, XPS, ATR) to evaluate the quality and homogeneity of the functionalization. With the objective of local (micrometric-scale) aptamer immobilization, we developed an original electrochemical strategy on an engraved Dyneon THV microchannel. Through local electrochemical carbonization, followed by adsorption of azide-bearing diazonium moieties and covalent linkage of alkyne-bearing aptamers through the click chemistry reaction, typical dimensions of the immobilization zones reached the 50 µm range. Other functionalization strategies, such as sol-gel encapsulation of aptamers, are currently being investigated and may also be suitable for the development of the analytical microdevice. The development of these functionalization strategies is the first crucial step in the design of the entire microdevice. These strategies allow the grafting of a large number of molecules for the development of new analytical tools in various domains such as the environment or healthcare.
Keywords: alkyne-azide click chemistry (CuAAC), electrochemical modification, microsystem, plasma bromination, surface functionalization, thermoplastic polymers
Procedia PDF Downloads 444
20043 Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems
Authors: Tomoaki Hashimoto
Abstract:
Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
Keywords: computational simulations, optimal control, predictive control, stochastic systems, discrete-time systems
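LMI-based stability conditions of this kind typically reduce to finding a matrix P > 0 satisfying a discrete-time Lyapunov inequality for the closed-loop dynamics. The sketch below is not the paper's condition; it illustrates the underlying idea for the deterministic part of the closed loop x_{k+1} = A_cl x_k, constructing such a P by the standard series when the closed loop is Schur stable (all names and the truncation length are assumptions).

```python
import numpy as np

def lyapunov_certificate(a_cl, q=None, terms=500):
    """Return P > 0 with A_cl^T P A_cl - P = -Q for a Schur-stable closed-loop
    matrix A_cl (spectral radius < 1), via the truncated series
    P = sum_k (A_cl^T)^k Q (A_cl)^k; return None if A_cl is not Schur stable."""
    n = a_cl.shape[0]
    q = np.eye(n) if q is None else q
    if max(abs(np.linalg.eigvals(a_cl))) >= 1:
        return None                           # unstable: no Lyapunov certificate
    p, term = np.zeros((n, n)), q.copy()
    for _ in range(terms):
        p += term
        term = a_cl.T @ term @ a_cl           # next series term
    return p
```

For stochastic disturbances the paper's condition would instead bound the expected value of the Lyapunov function, but the same LMI machinery (existence of a suitable P) is what a numerical verification checks.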
Procedia PDF Downloads 436