Search results for: Fourier neural operator
1685 Polymerization: An Alternative Technology for Heavy Metal Removal
Authors: M. S. Mahmoud
Abstract:
In this paper, the adsorption performance of a novel environmentally friendly material, calcium alginate gel beads, as a non-conventional technique for the removal of copper ions from aqueous solution is reported. Batch equilibrium studies were carried out to evaluate the adsorption capacity and process parameters such as pH, adsorbent dosage, initial metal ion concentration, stirring rate and contact time. It was observed that the optimum pH for maximum copper ion adsorption was 5.0. For all contact times, an increase in copper ion concentration resulted in a decrease in the percentage of copper ions removed. The Langmuir and Freundlich isotherm models were used to describe the experimental adsorption data. The adsorbent was characterized using Fourier transform infrared (FT-IR) spectroscopy and transmission electron microscopy (TEM).
Keywords: adsorption, alginate polymer, isothermal models, equilibrium
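For reference, the two isotherm models cited in this abstract are commonly written in the following standard forms (notation assumed here: qe is the equilibrium adsorption capacity, Ce the equilibrium copper concentration, qmax the monolayer capacity, and KL, KF, n fitted constants):

```latex
% Langmuir isotherm (monolayer adsorption on homogeneous sites)
q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e}
% Freundlich isotherm (empirical model for heterogeneous surfaces)
q_e = K_F C_e^{1/n}
```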
Procedia PDF Downloads 448
1684 Synthesis and Characterization of Molecularly Imprinted Polymer as a New Adsorbent for the Removal of Pyridine from Organic Medium
Authors: Opeyemi Elujulo, Aderonke Okoya, Kehinde Awokoya
Abstract:
A molecularly imprinted polymer (MIP) for the adsorption of pyridine (PYD) was obtained from PYD (the template), styrene (the functional monomer), divinyl benzene (the crosslinker), benzoyl peroxide (the initiator), and water (the porogen). When the template was removed by solvent extraction, imprinted binding sites capable of selectively rebinding the target molecule were left in the polymer material. The material was characterized by Fourier transform infrared spectroscopy and differential scanning calorimetry. Batch adsorption experiments were performed to study the adsorption behavior of the material in terms of adsorption kinetics, isotherms, and thermodynamic parameters. The results showed that the imprinted polymer exhibited higher affinity for PYD compared to the non-imprinted polymer (NIP).
Keywords: molecularly imprinted polymer, bulk polymerization, environmental pollutant, adsorption
Procedia PDF Downloads 142
1683 The Effect of the Reaction Time on the Microwave Synthesis of Magnesium Borates from MgCl2.6H2O, MgO and H3BO3
Authors: E. Moroydor Derun, P. Gurses, M. Yildirim, A. S. Kipcak, T. Ibroska, S. Piskin
Abstract:
Due to their strong mechanical and thermal properties, magnesium borates have a wide range of uses, such as in the ceramic industry, detergent production, friction-reducing additives and grease production. In this study, the microwave synthesis of magnesium borates from MgCl2.6H2O (magnesium chloride hexahydrate), MgO (magnesium oxide) and H3BO3 (boric acid) for different reaction times is investigated. X-ray Diffraction (XRD) and Fourier Transform Infrared (FT-IR) Spectroscopy are used to determine how the reaction time affects the products. The surface properties are investigated with Scanning Electron Microscopy (SEM). According to the XRD analysis, the synthesized compounds are Shabinite (Mg5(BO3)4Cl2(OH)5.4(H2O), pdf code 00-041-1407) and Karlite (Mg7(BO3)3(OH,Cl)5, pdf code 01-073-2158).
Keywords: magnesium borate, microwave synthesis, XRD, SEM
Procedia PDF Downloads 349
1682 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (except for hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (a 2:1 male-to-female ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise and more. We collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, including logistic regression, stepwise regression, neural network and decision tree, were incorporated to generate prediction results. We used the area under the curve (AUC) and accuracy (Acc.) to compare the four models; the best model was the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity and specificity were 27.3% and 90.8% for the neural network, 18.2% and 92.3% for stepwise logistic regression, 13.6% and 100% for the decision tree, and 27.3% and 89.2% for logistic regression. Based on the results of this study, we hope to improve the accuracy in the future by tuning the model parameters or applying other methods, and to address the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
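As a rough illustration of the evaluation pipeline described above (an 80/20 split and an AUC/accuracy comparison of several classifiers), a minimal scikit-learn sketch might look as follows; the file name and feature columns are hypothetical placeholders, not the study's actual data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical input: basic patient data with a binary stenosis label.
data = pd.read_csv("coronary_ct_patients.csv")            # placeholder file name
X = data[["age", "gender", "bmi", "sbp", "dbp", "diabetes",
          "hypertension", "hyperlipidemia", "smoking",
          "family_history", "exercise"]]                   # assumed column names
y = data["significant_stenosis"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)                  # 80% train / 20% test

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4),
    "neural_network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]               # scores for AUC
    pred = model.predict(X_test)
    print(name, "AUC:", roc_auc_score(y_test, prob),
          "Acc:", accuracy_score(y_test, pred))
```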
Procedia PDF Downloads 229
1681 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
Procedia PDF Downloads 96
1680 Detection of Epinephrine in Chicken Serum at Iron Oxide Screen Print Modified Electrode
Authors: Oluwole Opeyemi Dina, Saheed E. Elugoke, Peter Olutope Fayemi, Omolola E. Fayemi
Abstract:
This study presents the detection of epinephrine (EP) at a Fe₃O₄-modified screen-printed silver electrode (SPSE). The iron oxide (Fe₃O₄) nanoparticles were characterized with UV-visible spectroscopy, Fourier transform infrared spectroscopy (FT-IR) and scanning electron microscopy (SEM) prior to the modification of the SPSE. The EP oxidation peak current (Iap) increased with an increase in the concentration of EP as well as the scan rate (from 25 to 400 mV s⁻¹). Using cyclic voltammetry (CV), the relationship between Iap and EP concentration was linear over the ranges of 3.8–118.9 µM and 118.9–175 µM, with detection limits of 41.99 µM and 83.16 µM, respectively. Selective detection of EP in the presence of ascorbic acid was also achieved at this electrode.
Keywords: screen-printed electrode, iron oxide nanoparticle, epinephrine, serum, cyclic voltammetry
Procedia PDF Downloads 165
1679 Application of Sensory Thermography on Workers of a Wireless Industry in Mexico
Authors: Claudia Camargo Wilson, Enrique Javier de la Vega Bustillos, Jesús Everardo Olguín Tiznado, Juan Andrés López Barreras, Sandra K. Enriquez
Abstract:
This study focuses on the application of sensory thermography as a non-invasive method to evaluate the musculoskeletal injuries that industry workers performing Highly Repetitive Movements (HRM) may acquire. It was carried out at a wireless company with the aim of analyzing temperatures in workers' wrists, elbows and shoulders at their workstations during their activities, using sensory thermography to detect maximum temperatures (Tmax) that could indicate possible injuries. The tests were applied for 3 hours to 2 workers who work at the workstations with the highest rates of injuries and accidents. Comparisons were made for each body part studied for both workers because of the similarity between the workstation activities; both required an immediate evaluation. The Tmax was recorded during the test of worker 2, in the left wrist, reaching a temperature of 35.088 °C with a maximum increase of 1.856 °C.
Keywords: thermography, maximum temperature (Tmax), highly repetitive movements (HRM), operator
Procedia PDF Downloads 403
1678 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI
Authors: Ananya Ananya, Karthik Rao
Abstract:
Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods based on 2D convolutional neural networks for knee cartilage segmentation in 3D MRI. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation for the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone; the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with the manual segmentation as reference.
Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net
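For clarity, the volumetric Dice Score Coefficient used above as the evaluation metric can be computed per label as in the following sketch (binary masks assumed as NumPy arrays; this is a generic illustration, not the authors' code).

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Volumetric Dice between a predicted and a reference binary mask."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Example: Dice for the femoral-cartilage label (label id assumed to be 1);
# pred_vol and manual_vol would be 3D integer label volumes of equal shape.
# dice_fc = dice_coefficient(pred_vol == 1, manual_vol == 1)
```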
Procedia PDF Downloads 261
1677 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments were carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This process was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
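The decoding accuracy metric described above (the correlation coefficient r between the decoder output and the accelerometer reading) and the 1-second segmentation of multichannel ECoG can be sketched as below; the sampling rate and array names are assumptions for illustration, not the study's actual values.

```python
import numpy as np

def decoding_accuracy(decoded: np.ndarray, accelerometer: np.ndarray) -> float:
    """Pearson correlation r between decoded and measured movement traces."""
    return float(np.corrcoef(decoded, accelerometer)[0, 1])

def make_segments(ecog: np.ndarray, fs: int = 1000, seg_sec: float = 1.0):
    """Cut a (n_channels, n_samples) ECoG array into 1-second windows."""
    seg_len = int(fs * seg_sec)
    n_segments = ecog.shape[1] // seg_len
    # Result shape: (n_segments, n_channels, seg_len)
    return np.stack([ecog[:, i * seg_len:(i + 1) * seg_len]
                     for i in range(n_segments)])
```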
Procedia PDF Downloads 177
1676 Optimizing Load Shedding Schedule Problem Based on Harmony Search
Authors: Almahd Alshereef, Ahmed Alkilany, Hammad Said, Azuraliza Abu Bakar
Abstract:
From time to time, the electrical power grid is directed by the National Electricity Operator to conduct load shedding, which involves hours-long power outages in the area of this study, the Southern Electrical Grid of Libya (SEGL). Load shedding is conducted in order to alleviate pressure on the National Electricity Grid at times of peak demand. This approach chooses a set of categories to study the load-shedding problem, considering the effect of demand priorities on the operation of the power system during emergencies. The classification of category regions for the load-shedding problem is solved by a new algorithm (the harmony algorithm) based on the "random generation list of category regions", which is a possible solution with a degree of proximity to the optimum. The obtained results show additional enhancements compared to other heuristic approaches. The case studies are carried out on the SEGL.
Keywords: optimization, harmony algorithm, load shedding, classification
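Although the paper's category-region encoding is specific to the SEGL, the underlying harmony search loop generally follows the pattern sketched below (a generic minimization version with assumed parameter values; it is not the authors' implementation).

```python
import random

def harmony_search(objective, n_vars, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=1000):
    """Generic harmony search minimizing `objective` over continuous variables."""
    low, high = bounds
    # Harmony memory: hms random candidate solutions and their costs
    memory = [[random.uniform(low, high) for _ in range(n_vars)] for _ in range(hms)]
    scores = [objective(h) for h in memory]

    for _ in range(iterations):
        new = []
        for j in range(n_vars):
            if random.random() < hmcr:                     # memory consideration
                value = random.choice(memory)[j]
                if random.random() < par:                  # pitch adjustment
                    value += random.uniform(-bandwidth, bandwidth)
            else:                                          # random selection
                value = random.uniform(low, high)
            new.append(min(max(value, low), high))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, new_score

    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize a toy cost (stand-in for a load-shedding schedule cost)
best_x, best_cost = harmony_search(lambda x: sum(v * v for v in x),
                                   n_vars=5, bounds=(-10.0, 10.0))
```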
Procedia PDF Downloads 397
1675 Development of 3D Printed, Conductive, Biodegradable Nerve Conduits for Neural Regeneration
Authors: Wei-Chia Huang, Jane Wang
Abstract:
Damage to nerves is considered one of the most irreversible injuries. The regeneration of nerves has always been an important topic in regenerative medicine. In general, damage to human tissue will naturally repair over time. However, when nerves are damaged, a healed flesh wound cannot guarantee full restoration of the original function, as damage to truncated nerves is often irreversible. Therefore, the development of treatment methods to successfully guide and accelerate the regeneration of nerves has been highly sought after. In order to induce nerve tissue growth, nerve conduits are commonly used to help reconnect broken nerve bundles, providing protection at the location of the fracture while guiding the growth of the nerve bundles. To prevent the protected tissue from becoming necrotic and to ensure the growth rate, the conduits used are often modified with microstructures or blended with neuron growth factors that may facilitate nerve regeneration. Electrical stimulation is another attempted treatment for medical rehabilitation. With an appropriate range of voltages and stimulation frequencies, it has been demonstrated to promote cell proliferation and migration. Biodegradability is critical for medical devices like nerve conduits, while conductive polymers pose great potential for the differentiation and growth of nerve cells. In this work, biodegradability and conductivity were combined into a novel biodegradable, photocurable, conductive polymer composite material by embedding conductive nanoparticles in poly(glycerol sebacate) acrylate (PGSA), which was 3D-printed into nerve conduits. Rat pheochromocytoma cells and rat neuronal Schwann cells were chosen for the in vitro tests of the conduits and demonstrated selective growth upon culture in the conductive conduits with built-in microchannels and electrical stimulation.
Keywords: biodegradable polymer, 3D printing, neural regeneration, electrical stimulation
Procedia PDF Downloads 104
1674 Effect of Electromagnetic Fields on Protein Extraction from Shrimp By-Products for Electrospinning Process
Authors: Guido Trautmann-Sáez, Mario Pérez-Won, Vilbett Briones, María José Bugueño, Gipsy Tabilo-Munizaga, Luis Gonzáles-Cavieres
Abstract:
Shrimp by-products are a valuable source of protein. However, traditional protein extraction methods have limitations in terms of their efficiency. In this work, protein extraction from shrimp (Pleuroncodes monodon) industrial by-products was assisted with ohmic heating (OH), microwave (MW) and pulsed electric field (PEF) treatments. It was performed by a chemical method (using 2 M NaOH and HCl) assisted with OH, MW and PEF in a continuous flow system (5 ml/s). Protein determination, differential scanning calorimetry (DSC) and Fourier-transform infrared (FTIR) spectroscopy were carried out. Results indicate improvements in protein extraction efficiency of 19.25% (PEF), 3.65% (OH) and 28.19% (MW). The most efficient method was selected for the electrospinning process and fiber production.
Keywords: electrospinning process, emerging technology, protein extraction, shrimp by-products
Procedia PDF Downloads 90
1673 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult to do, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant problems or issues that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
Procedia PDF Downloads 90
1672 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach
Authors: Gong Zhilin, Jing Yang, Jian Yin
Abstract:
The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance the quality of the data through alleviation of missing data, noisy data and null values. The pre-processed data are class-imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the balanced-class data, the most relevant features are extracted, such as improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The ensemble classifier consists of deep learning models, namely Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
Keywords: credit card, data mining, fraud detection, money transactions
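As a rough sketch of two of the steps named above (the K-means-clustering-based SMOTE balancing and the higher-order statistical features, skewness and kurtosis), the following uses the imbalanced-learn and SciPy packages; it illustrates the general idea only and is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from imblearn.over_sampling import KMeansSMOTE  # K-means clustering based SMOTE

def higher_order_features(X: np.ndarray) -> np.ndarray:
    """Per-sample statistical features: mean, median, std, skewness, kurtosis."""
    return np.column_stack([
        X.mean(axis=1),
        np.median(X, axis=1),
        X.std(axis=1),
        skew(X, axis=1),
        kurtosis(X, axis=1),
    ])

# X_raw: transaction feature matrix, y: fraud labels (0 = genuine, 1 = fraud)
# X_bal, y_bal = KMeansSMOTE(random_state=42).fit_resample(X_raw, y)
# X_stats = higher_order_features(X_bal)
```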
Procedia PDF Downloads 131
1671 Series "H154M" as a Unit Area of the Region between the Lines and Curves
Authors: Hisyam Hidayatullah
Abstract:
Consciously or not, the events of this world all have a pattern; even the events of the universe, according to the Big Bang theory, make the solar system so regular in its rotation. The author would like to derive the area of the region between the quadratic function y = kx² and the line y = ka² using the GeoGebra application, version 4.2. This paper can provide a series that is no less interesting than the Fourier series, adding new material on series that can be calculated with sigma notation. In addition, a unique sequence of natural numbers arises from the successive changes in the computed areas. Finally, this paper provides an analytical and geometric proof that the area of the region bounded by y = ka², the curve y = kx², the x-axis, and the lines x = √a and x = -√a forms a series of numbers for k = 1 and a ∈ natural numbers: ∑_{i=0}^{n} (4i√i)/3 = 0 + 4/3 + (8√2)/3 + 4√3 + ⋯ + (4n√n)/3. The author calls the series “H154M”.
Keywords: sequence, series, sigma notation, GeoGebra application
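A short check of the per-term area, under the assumption that the bounding line is y = ka (which is what makes the stated limits x = ±√a and the listed terms consistent), gives:

```latex
\int_{-\sqrt{a}}^{\sqrt{a}} \left(ka - kx^{2}\right)\,dx
  = k\left[ax - \frac{x^{3}}{3}\right]_{-\sqrt{a}}^{\sqrt{a}}
  = \frac{4}{3}\,k\,a\sqrt{a},
\qquad k = 1,\; a \in \mathbb{N}
  \;\Rightarrow\; \frac{4a\sqrt{a}}{3}.
```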
Procedia PDF Downloads 376
1670 Electrical and Optical Properties of Polyaniline: Cadmium Sulphide Quantum Dots Nanocomposites
Authors: Akhtar Rasool, Tasneem Zahra Rizvi
Abstract:
In this study, a series of cadmium sulphide quantum dots/polyaniline nanocomposites with varying compositions were prepared by an in-situ polymerization technique and were characterized using X-ray diffraction and Fourier transform infrared spectroscopy. The surface morphology was studied by scanning electron microscopy. UV-Visible spectroscopy was used to determine the energy band gap of the nanoparticles and the nanocomposites. The temperature dependence of the DC electrical conductivity and the temperature and frequency dependence of the AC conductivity were investigated to study the charge transport mechanism in the nanocomposites. The DC conductivity was found to be typical of semiconducting behavior, following Mott's 1D variable range hopping model. The frequency-dependent AC conductivity followed the universal power law.
Keywords: conducting polymers, nanocomposites, polyaniline composites, quantum dots
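For reference, the two transport laws invoked above are commonly written as follows: Mott's variable range hopping in one dimension for the DC conductivity and the universal (Jonscher) power law for the AC conductivity, where σ₀, T₀, A and s are material-dependent fitting parameters (standard forms, assumed here rather than taken from the paper):

```latex
\sigma_{dc}(T) = \sigma_{0}\exp\!\left[-\left(\frac{T_{0}}{T}\right)^{1/2}\right]
\quad\text{(Mott VRH, exponent } \tfrac{1}{d+1}\text{ with } d=1\text{)},
\qquad
\sigma_{ac}(\omega) = \sigma_{dc} + A\,\omega^{s},\quad 0 < s < 1.
```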
Procedia PDF Downloads 254
1669 Improvement of Overall Equipment Effectiveness of Load Haul Dump Machines in Underground Coal Mines
Authors: J. BalaRaju, M. Govinda Raj, C. S. N. Murthy
Abstract:
Every organization in the competitive world tends to improve its economy by increasing its production and productivity rates. Unequivocally, production in Indian underground mines over the years has not been satisfactory, due to a variety of reasons. There are manifold avenues for the betterment of production, and one such approach is enhanced utilization of mechanized equipment such as the Load Haul Dumper (LHD), which is used for loading and hauling in underground mines. In view of the aforementioned facts, this paper delves into the identification of the key factors influencing LHD performance, such as maintenance effectiveness, vehicle condition, operator skill and utilization of the machines. An attempt has been made to improve the performance of the equipment through evaluation of Overall Equipment Effectiveness (OEE). Two different approaches for the evaluation of OEE have been adopted and compared under various operating conditions. The use of the OEE calculation in terms of percentage availability, performance and quality, and the hitherto existing situation of underground mine production, are evaluated. Necessary recommendations are suggested to the mining industry on the basis of OEE.
Keywords: utilization, maintenance, availability, performance and quality
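The OEE figure referred to above is conventionally the product of the three percentages named (a standard definition, assumed here rather than quoted from the paper):

```latex
\mathrm{OEE} = \text{Availability} \times \text{Performance} \times \text{Quality},
\qquad
\text{Availability} = \frac{\text{operating time}}{\text{planned production time}},\;
\text{Performance} = \frac{\text{actual output}}{\text{ideal output}},\;
\text{Quality} = \frac{\text{good output}}{\text{actual output}}.
```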
Procedia PDF Downloads 222
1668 Group Decision Making through Interval-Valued Intuitionistic Fuzzy Soft Set TOPSIS Method Using New Hybrid Score Function
Authors: Syed Talib Abbas Raza, Tahseen Ahmed Jilani, Saleem Abdullah
Abstract:
This paper presents an interval-valued intuitionistic fuzzy soft set (IVIFSS) based TOPSIS method for group decision making. The interval-valued intuitionistic fuzzy soft set is a hybrid of an interval-valued intuitionistic fuzzy set and a soft set. In group decision-making problems, the IVIFSS makes the process much more algebraically elegant. We have used the weighted arithmetic averaging operator for aggregating the information and define a new hybrid score function as a metric tool for comparison between interval-valued intuitionistic fuzzy values. In an illustrative example, we have applied the developed method to a criminological problem. We have developed a group decision-making model for integrating the imprecise and hesitant evaluations of multiple law enforcement agencies working on target killing cases in the country.
Keywords: group decision making, interval-valued intuitionistic fuzzy soft set, TOPSIS, score function, criminology
Procedia PDF Downloads 604
1667 Probabilistic Simulation of Triaxial Undrained Cyclic Behavior of Soils
Authors: Arezoo Sadrinezhad, Kallol Sett, S. I. Hariharan
Abstract:
In this paper, a probabilistic framework based on the Fokker-Planck-Kolmogorov (FPK) approach has been applied to simulate the triaxial cyclic constitutive behavior of uncertain soils. The framework builds upon previous work of the writers, and it has been extended for cyclic probabilistic simulation of the triaxial undrained behavior of soils. A von Mises elastic-perfectly plastic material model is considered. It is shown that by using the probabilistic framework, some of the most important aspects of soil behavior under cyclic loading can be captured even with a simple elastic-perfectly plastic constitutive model.
Keywords: elasto-plasticity, uncertainty, soils, Fokker-Planck equation, Fourier spectral method, finite difference method
Procedia PDF Downloads 379
1666 Advancements in Autonomous Drones for Enhanced Healthcare Logistics
Authors: Bhaargav Gupta P., Vignesh N., Nithish Kumar R., Rahul J., Nivetha Ruvah D.
Abstract:
Delivering essential medical supplies to rural and underserved areas is challenging due to infrastructure limitations and logistical barriers, often resulting in inefficiencies and delays. Traditional delivery methods are hindered by poor road networks, long distances, and difficult terrains, compromising timely access to vital resources, especially in emergencies. This paper introduces an autonomous drone system engineered to optimize last-mile delivery. By utilizing advanced navigation and object-detection algorithms, such as region-based convolutional neural networks (R-CNN), our drones efficiently avoid obstacles, identify safe landing zones, and adapt dynamically to varying environments. Equipped with high-precision GPS and autonomous capabilities, the drones effectively navigate complex, remote areas with minimal dependence on established infrastructure. The system includes a dedicated mobile application for secure order placement and real-time tracking, and a secure payload box with OTP verification ensures tamper-resistant delivery to authorized recipients. This project demonstrates the potential of automated drone technology in healthcare logistics, offering a scalable and eco-friendly approach to enhance accessibility and service delivery in underserved regions. By addressing logistical gaps through advanced automation, this system represents a significant advancement toward sustainable, accessible healthcare in remote areas.
Keywords: region-based convolutional neural network, one time password, global positioning system, autonomous drones, healthcare logistics
Procedia PDF Downloads 9
1665 A Fault Analysis Cracked-Rotor-to-Stator Rub and Unbalance by Vibration Analysis Technique
Authors: B. X. Tchomeni, A. A. Alugongo, L. M. Masu
Abstract:
An analytical 4-DOF nonlinear model of a de Laval rotor-stator system based on energy principles has been used theoretically and experimentally to investigate fault symptoms in a rotating system. The faults, namely rotor-stator rub, crack and unbalance, are modelled as excitations on the rotor shaft. The Mayes steering function is used to simulate the breathing behaviour of the crack. The fault analysis technique is based on waveform signals, orbits and the Fast Fourier Transform (FFT) derived from simulated and real measured signals. Simulated and experimental results manifest considerable mutual resemblance of the elliptic-shaped orbits and FFT spectra for the same range of test data.
Keywords: breathing crack, fault, FFT, nonlinear, orbit, rotor-stator rub, vibration analysis
Procedia PDF Downloads 308
1664 Quantification of Site Nonlinearity Based on HHT Analysis of Seismic Recordings
Authors: Ruichong Zhang
Abstract:
This study proposes a recording-based approach to characterize and quantify earthquake-induced site nonlinearity, exemplified by soil nonlinearity and/or liquefaction. As an alternative to Fourier spectral analysis (FSA), the paper introduces time-frequency analysis of earthquake ground motion recordings with the aid of the so-called Hilbert-Huang transform (HHT), and offers justification for the HHT in addressing the nonlinear features shown in the recordings. Using the 2001 Nisqually earthquake recordings, this study shows that the proposed approach is effective in characterizing site nonlinearity and quantifying its influence on seismic ground responses.
Keywords: site nonlinearity, site amplification, site damping, Hilbert-Huang Transform (HHT), liquefaction, 2001 Nisqually Earthquake
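The HHT step of extracting instantaneous amplitude and frequency from a decomposed signal component can be sketched with the Hilbert transform as below; the empirical mode decomposition that produces the intrinsic mode functions is assumed to have been done already, and the signal and sampling rate are purely illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(imf: np.ndarray, fs: float):
    """Instantaneous amplitude and frequency of one intrinsic mode function."""
    analytic = hilbert(imf)                          # analytic signal
    amplitude = np.abs(analytic)                     # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))            # unwrapped instantaneous phase
    frequency = np.diff(phase) / (2.0 * np.pi) * fs  # Hz, length N-1
    return amplitude, frequency

# Illustrative use on a synthetic ground-motion-like component sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
component = np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)  # chirp-like IMF stand-in
amp, freq = instantaneous_attributes(component, fs)
```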
Procedia PDF Downloads 487
1663 Pretreatment of Cattail (Typha domingensis) Fibers to Obtain Cellulose Nanocrystals
Authors: Marivane Turim Koschevic, Maycon dos Santos, Marcello Lima Bertuci, Farayde Matta Fakhouri, Silvia Maria Martelli
Abstract:
Natural fibers are raw materials rich in cellulose and abundant in the world, and their use for the extraction of cellulose nanocrystals is promising; one example is the cattail, a macrophyte weed native to South America. This study deals with the pre-treatment of crushed cattail fibers by six different methods of mercerization, followed by bleaching. As a result, the positive effects of treating the fibers were verified by means of optical microscopy and Fourier transform infrared spectroscopy (FTIR). The sample selected for future testing of cellulose nanocrystal extraction was treated in 2.5% NaOH for 2 h at 60 °C in the first stage, and with 30 vol H2O2 / 5% NaOH in the proportion 30/70% (v/v) for 1 hour at 60 °C, followed by treatment at 50/50% (v/v) for 15 minutes at 50 °C with the same solution constituents.
Keywords: cellulose nanocrystal, chemical treatment, mercerization, natural fibers
Procedia PDF Downloads 293
1662 Dynamics of Light Induced Current in 1D Coupled Quantum Dots
Authors: Tokuei Sako
Abstract:
Laser-induced current in a quasi-one-dimensional nanostructure has been studied with a model of a few electrons confined in a 1D electrostatic potential coupled to electrodes at both ends and subjected to a pulsed laser field. The time propagation of the one- and two-electron wave packets has been calculated by integrating the time-dependent Schrödinger equation directly by the symplectic integrator method with a uniform Fourier grid. The temporal behavior of the resultant light-induced current in the studied systems has been discussed with respect to the lifetime of the quasi-bound states formed when the static bias voltage is applied.
Keywords: pulsed laser field, nanowire, electron wave packet, quantum dots, time-dependent Schrödinger equation
Procedia PDF Downloads 357
1661 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
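The Grid Search CV tuning of the MLP described above typically follows the pattern in this sketch; the parameter grid and feature matrix are placeholders, not the project's actual settings.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

# X_features: combined CNN/YOLO outputs plus other input data; y: risk labels.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],   # assumed candidate values
    "alpha": [1e-4, 1e-3, 1e-2],
    "learning_rate_init": [1e-3, 1e-2],
}

search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=42),
    param_grid,
    cv=5,                 # 5-fold cross-validation
    scoring="accuracy",
)
# search.fit(X_features, y)
# print(search.best_params_, search.best_score_)
```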
Procedia PDF Downloads 64
1660 Magnetic Nanoparticles for Protein C Purification
Authors: Duygu Çimen, Nilay Bereli, Adil Denizli
Abstract:
The aim of this study is to synthesize magnetic nanoparticles for the purification of protein C. For this aim, N-methacryloyl-(L)-histidine methyl ester (MAH)-containing, 2-hydroxyethyl methacrylate (HEMA)-based magnetic nanoparticles were synthesized using the micro-emulsion polymerization technique for templating protein C via metal chelation. The obtained nanoparticles were characterized with Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), zeta-size analysis and electron spin resonance (ESR) spectroscopy. After that, they were used for protein C purification from aqueous solution to evaluate and optimize the adsorption conditions. Hereby, the affecting factors such as concentration, pH, ionic strength, temperature, and reusability were evaluated. As the last step, protein C was determined with sodium dodecyl sulfate-polyacrylamide gel electrophoresis.
Keywords: immobilized metal affinity chromatography (IMAC), magnetic nanoparticle, protein C, hydroxyethyl methacrylate (HEMA)
Procedia PDF Downloads 425
1659 Regenerated Cellulose Prepared by Using NaOH/Urea
Authors: Lee Chiau Yeng, Norhayani Othman
Abstract:
Regenerated cellulose fiber is fabricated in a NaOH/urea aqueous solution. In this work, cellulose is dissolved in 7 wt% NaOH / 12 wt% urea at a temperature of -12 °C to prepare regenerated cellulose. Thermal and structural properties of the cellulose and the regenerated cellulose were compared and investigated by field emission scanning electron microscopy (FESEM), Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), thermogravimetric analysis (TGA), and differential scanning calorimetry. FESEM results revealed that the regenerated cellulose fibers showed a more circular shape with irregular size due to fiber agglomeration. FTIR showed the difference between the structures of the cellulose and the regenerated cellulose fibers. In this case, the regenerated cellulose fibers have a cellulose II crystalline structure with a lower degree of crystallinity. The regenerated cellulose exhibited better thermal stability than the original cellulose.
Keywords: regenerated cellulose, cellulose, NaOH, urea
Procedia PDF Downloads 431
1658 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. The traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which make broad coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 114
1657 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the model accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc. for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, or can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
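An out-of-sample comparison of the four algorithm families listed above can be set up along the following lines (a generic scikit-learn sketch with hypothetical feature names and loader; the proprietary Las Vegas data are not reproduced here).

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# X: price, review score, location, chain scale, ... ; y: room-nights demanded.
# X, y = load_market_level_data()                     # hypothetical loader

def compare_forecasters(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {
        "knn": KNeighborsRegressor(n_neighbors=5),
        "svm": SVR(C=10.0),
        "regression_tree": DecisionTreeRegressor(max_depth=6),
        "neural_network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```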
Procedia PDF Downloads 133
1656 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources
Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy
Abstract:
This paper presents a load control strategy based on a modification of the Big Bang-Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on the renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
Keywords: big bang big crunch, distributed generation, load control, optimization, planning
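The standard Big Bang-Big Crunch loop that the paper modifies alternates a random "big bang" scatter with a "big crunch" contraction to the fitness-weighted centre of mass; a generic minimization sketch (not the paper's modified algorithm, and with an arbitrary stand-in cost function) is given below.

```python
import random

def big_bang_big_crunch(objective, n_vars, bounds, pop_size=40, iterations=200):
    """Generic Big Bang-Big Crunch minimization over continuous variables."""
    low, high = bounds
    centre = [random.uniform(low, high) for _ in range(n_vars)]
    best, best_cost = centre[:], objective(centre)

    for k in range(1, iterations + 1):
        # Big Bang: scatter candidates around the current centre, shrinking with k
        spread = (high - low) / k
        population = [[min(max(c + random.gauss(0.0, spread), low), high)
                       for c in centre] for _ in range(pop_size)]
        costs = [objective(p) for p in population]

        # Big Crunch: contract to the centre of mass weighted by inverse cost
        weights = [1.0 / (c + 1e-12) for c in costs]
        total = sum(weights)
        centre = [sum(w * p[j] for w, p in zip(weights, population)) / total
                  for j in range(n_vars)]

        i_best = min(range(pop_size), key=lambda i: costs[i])
        if costs[i_best] < best_cost:
            best, best_cost = population[i_best][:], costs[i_best]

    return best, best_cost

# Example: toy stand-in cost for the energy purchased from the substation
best_schedule, best_cost = big_bang_big_crunch(lambda x: sum(v * v for v in x),
                                               n_vars=24, bounds=(0.0, 1.0))
```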
Procedia PDF Downloads 345