Search results for: perceptual linear prediction (PLP’s)
3827 The Effect of Linear Low-Density Polyethylene Cross-Contamination by Other Plastic Types on Bitumen Modification
Authors: Nioushasadat Haji Seyed Javadi, Ailar Hajimohammadi, Nasser Khalili
Abstract:
Currently, the recycling of plastic wastes has been the subject of much research attention, especially in pavement construction, where virgin polymers can be replaced by recycled plastics for asphalt binder modification. Among the plastic types, recycled linear low-density polyethylene (RLLDPE) has been one of the most common and widely available plastics for bitumen modification. However, it is important to note that during the recycling process, LLDPE can easily be contaminated with other plastic types, especially with low-density polyethylene (LDPE), high-density polyethylene (HDPE), and polypropylene (PP). The cross-contamination of LLDPE with other plastics lowers its quality and, consequently, can affect the asphalt modification process. This study aims to assess the effect of LLDPE cross-contamination on bitumen modification. To do so, samples of bitumen modified with LLDPE and with blends of LLDPE with LDPE, HDPE, and PP were prepared and compared through physical and rheological evaluations. The experimental tests, including softening point, penetration, viscosity at 135 °C, and dynamic shear rheometer, were conducted. The results indicated that the effect of cross-contamination on softening point and rutting resistance was negligible. On the other hand, penetration and viscosity were highly impacted. The results also showed that, among the contaminating plastic types, PP had the greatest influence on the properties of the LLDPE-modified bitumen, compared with HDPE and LDPE.
Keywords: recycled polyethylene, polymer cross-contamination, waste plastic, bitumen, rutting resistance
Procedia PDF Downloads 127
3826 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium
Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault
Abstract:
Linear ultrasonic techniques play a major role in Non-Destructive Evaluation (NDE) of civil engineering structures in concrete since they can meet operational requirements. Interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple-scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium, like concrete, integrating the multiple-scattering phenomena in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. First, this model is applied to one scattering element (a cylinder) in a homogeneous medium in a linear-elastic system, and it is validated by comparison with the analytical solution. Then, some cases of multiple scattering by a set of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and scatterer size, concentration, and shape. Also, the effective properties are compared with the predictions of the Waterman-Truell model to verify its validity. Finally, the mortar viscoelastic behavior is introduced in the simulation in order to consider the dispersion and the attenuation due to the porosity included in the cement paste. In the future, different steps will be developed: comparison with experimental results, interpretation of NDE measurements, and optimization of NDE parameters prior to auscultation.
Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements
Procedia PDF Downloads 275
3825 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, with a median area under the receiver operating characteristic curve (AUC) of 68% alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
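The pipeline described here (recursive feature elimination with cross-validation wrapped around an SVM, followed by gradient boosting scored on recall) can be sketched in a few lines of scikit-learn. The data below are synthetic stand-ins; the feature counts, CV folds, and step size are assumptions for illustration, not the study's actual settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for the epigenetic feature matrix (samples x methylation/gene features).
X, y = make_classification(n_samples=200, n_features=100, n_informative=15, random_state=0)

# Recursive feature elimination with cross-validation, using a linear SVM to rank features.
selector = RFECV(estimator=SVC(kernel="linear"), step=5, cv=5, scoring="recall")
X_reduced = selector.fit_transform(X, y)

# Gradient boosting on the reduced feature set, evaluated on recall as in the abstract.
gbc = GradientBoostingClassifier(random_state=0)
print(cross_val_score(gbc, X_reduced, y, cv=5, scoring="recall").mean())
```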
Procedia PDF Downloads 55
3824 Thermoluminescent Response of Nanocrystalline BaSO4:Eu to 85 MeV Carbon Beams
Authors: Shaila Bahl, S. P. Lochab, Pratik Kumar
Abstract:
Nanotechnology and nanomaterials have attracted researchers from different fields, especially from the field of luminescence. Recent studies on various luminescent nanomaterials have shown their relevance in the dosimetry of ionizing radiations for the measurement of high doses using the thermoluminescence (TL) technique, where conventional microcrystalline phosphors saturate. Ion beams have been used for diagnostic and therapeutic purposes due to their favorable profile of dose deposition at the end of the range, known as the Bragg peak. When dealing with human beings, doses from these beams need to be measured with great precision and accuracy. Hence, detailed investigations of suitable thermoluminescent dosimeters (TLDs) for dose verification in ion beam irradiation are required. This paper investigates the TL response of nanocrystalline BaSO4 doped with Eu to an 85 MeV carbon beam. The synthesis was done using the co-precipitation technique by mixing barium chloride and ammonium sulphate solutions. To investigate the crystallinity and particle size, analytical techniques such as X-ray diffraction (XRD) and transmission electron microscopy (TEM) were used, which revealed an average particle size of 45 nm with an orthorhombic structure. Samples in pellet form were irradiated by the 85 MeV carbon beam in the fluence range of 1×10¹⁰-5×10¹³. TL glow curves of the irradiated samples show two prominent glow peaks at around 460 K and 495 K. The TL response is linear up to a fluence of 1×10¹³, after which saturation was observed. The wider linear TL response of nanocrystalline BaSO4:Eu and its low fading make it a superior candidate as a dosimeter for detecting doses of carbon beams.
Keywords: radiation, dosimetry, carbon ions, thermoluminescence
Procedia PDF Downloads 288
3823 Brain-Computer Interface Based Real-Time Control of Fixed Wing and Multi-Rotor Unmanned Aerial Vehicles
Authors: Ravi Vishwanath, Saumya Kumaar, S. N. Omkar
Abstract:
Brain-computer interfacing (BCI) is a technology that is almost four decades old, and it was originally developed solely for the purpose of developing and enhancing neuroprosthetics. However, in recent times, with the commercialization of non-invasive electroencephalogram (EEG) headsets, the technology has seen a wide variety of applications like home automation, wheelchair control, vehicle steering, etc. One of the latest applications is the mind-controlled quadrotor unmanned aerial vehicle. These applications, however, do not require a very high-speed response and give satisfactory results when standard classification methods like the Support Vector Machine (SVM) and the Multi-Layer Perceptron (MLP) are used. Issues are faced when there is a requirement for high-speed control, as in the case of fixed-wing unmanned aerial vehicles, where such methods are rendered unreliable due to the low speed of classification. Such an application requires the system to classify data at high speed in order to retain the controllability of the vehicle. This paper proposes a novel method of classification which uses a combination of Common Spatial Patterns and Linear Discriminant Analysis that provides improved classification accuracy in real time. A non-linear SVM based classification technique has also been discussed. Further, this paper discusses the implementation of the proposed method on fixed-wing and VTOL unmanned aerial vehicles.
Keywords: brain-computer interface, classification, machine learning, unmanned aerial vehicles
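A minimal sketch of the feature/classifier combination named here — Common Spatial Patterns (CSP) filters followed by Linear Discriminant Analysis — is shown below on synthetic EEG trials. The channel count, trial length, and number of CSP components are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_components=4):
    """CSP spatial filters from two classes of trials, each (n_trials, n_channels, n_samples)."""
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Generalized eigendecomposition of the class covariance matrices.
    evals, evecs = np.linalg.eig(np.linalg.solve(cov_a + cov_b, cov_a))
    order = np.argsort(np.real(evals))[::-1]
    picks = np.r_[order[:n_components // 2], order[-(n_components // 2):]]
    return np.real(evecs[:, picks]).T

def log_var_features(trials, W):
    projected = np.einsum("ck,nks->ncs", W, trials)       # spatially filtered trials
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))   # normalized log-variance features

# Hypothetical EEG trials: 40 per mental class, 8 channels, 256 samples each.
rng = np.random.default_rng(0)
left, right = rng.standard_normal((40, 8, 256)), rng.standard_normal((40, 8, 256))
W = csp_filters(left, right)
X = np.vstack([log_var_features(left, W), log_var_features(right, W)])
y = np.r_[np.zeros(40), np.ones(40)]
print(LinearDiscriminantAnalysis().fit(X, y).score(X, y))
```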
Procedia PDF Downloads 283
3822 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at improving food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies aim to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, providing actionable insights to mitigate food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
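The supply-chain step described above (linear programming to move produce from fields to distribution hubs at minimum cost and waste) can be illustrated with a tiny transportation model. The farms, hubs, costs, and tonnages below are hypothetical, chosen only to show the structure of such a model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical network: 2 farms shipping to 3 distribution hubs; costs per tonne shipped.
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 7.0]]).ravel()     # decision variables x[farm, hub], flattened
supply = [60, 80]                              # tonnes available at each farm
demand = [50, 40, 50]                          # tonnes required at each hub

# Supply constraints: shipments out of each farm cannot exceed what it harvested.
A_ub = np.zeros((2, 6)); A_ub[0, :3] = 1; A_ub[1, 3:] = 1
# Demand constraints: each hub receives exactly what it needs.
A_eq = np.zeros((3, 6))
for j in range(3):
    A_eq[j, [j, j + 3]] = 1

res = linprog(cost, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(2, 3), res.fun)            # optimal shipment plan and total cost
```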
Procedia PDF Downloads 62
3821 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods
Authors: Alok Madan, Arshad K. Hashmi
Abstract:
Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected, and if the selected trial design does not conform to the desired performance objective, the trial design is revised. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.
Keywords: masonry infilled frame, energy methods, near-fault ground motions, pushover analysis, nonlinear dynamic analysis, seismic demand
Procedia PDF Downloads 292
3820 Impact of the Electricity Market Prices during the COVID-19 Pandemic on Energy Storage Operation
Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić
Abstract:
With the restructuring and deregulation of the power system, storage owners, generation companies or private producers can offer their multiple services on various power markets and earn income in different types of markets, such as the day-ahead, real-time, and ancillary services markets. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The optimization of the energy storage operation was performed using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In numerical examples, the pumped storage hydropower plant operation has been optimized using the realized hourly electricity market prices from Nord Pool for the pre-pandemic (2019) and the pandemic (2020 and 2021) years. The impact of the electricity market prices during the COVID-19 pandemic on energy storage operation is shown through the analysis of income, operating hours, reserved capacity and consumed energy for each service. The results indicate the role of energy storage during a significant fluctuation in electricity and services prices.
Keywords: electrical market prices, electricity market, energy storage optimization, mixed integer linear programming (MILP) optimization
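A toy version of the MILP described here — a storage plant choosing, hour by hour, whether to pump or generate against day-ahead prices — can be written with PuLP. The prices, 100 MW rating, 75% round-trip efficiency, and single-horizon energy balance are assumptions for illustration, not the paper's plant data, and the ancillary-service products are omitted.

```python
import pulp

# Hypothetical hourly day-ahead prices (EUR/MWh); the plant can pump or generate 100 MW per hour.
prices = [30, 25, 28, 90, 110, 95, 40, 35]
P, eff, hours = 100, 0.75, range(len(prices))

prob = pulp.LpProblem("pumped_storage_arbitrage", pulp.LpMaximize)
gen = pulp.LpVariable.dicts("gen", hours, cat="Binary")    # 1 if generating in hour t
pump = pulp.LpVariable.dicts("pump", hours, cat="Binary")  # 1 if pumping in hour t

# Objective: revenue from generation minus the cost of pumping energy.
prob += pulp.lpSum(prices[t] * P * (gen[t] - pump[t]) for t in hours)
for t in hours:
    prob += gen[t] + pump[t] <= 1                          # cannot pump and generate at once
# Round-trip efficiency: energy generated cannot exceed eff times energy pumped.
prob += pulp.lpSum(P * gen[t] for t in hours) <= eff * pulp.lpSum(P * pump[t] for t in hours)

prob.solve()
print([(t, int(gen[t].value()), int(pump[t].value())) for t in hours], pulp.value(prob.objective))
```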
Procedia PDF Downloads 174
3819 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality issues, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation
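The two parallel 2D views named in the abstract — a spectrogram for the frequency domain and a derivative-based heatmap for the time domain — can be produced with standard tooling. Below is a minimal sketch on a synthetic hourly series; the segment lengths and the choice of first/second derivatives as the heatmap channels are assumptions for illustration, not the Times2D architecture itself.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical hourly series: daily periodicity, noise, and an abrupt level shift.
t = np.arange(24 * 60)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
x[800:] += 1.5

# Frequency-domain view: a 2D spectrogram capturing the periodic structure.
f, seg_t, Sxx = spectrogram(x, fs=1.0, nperseg=48, noverlap=24)

# Time-domain view: first and second derivatives folded into a (channel, day, hour) heatmap,
# which highlights sharp fluctuations and turning points.
d1, d2 = np.gradient(x), np.gradient(np.gradient(x))
heatmap = np.stack([d1, d2]).reshape(2, -1, 24)

print(Sxx.shape, heatmap.shape)   # 2D/3D arrays suitable for vision-style models
```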
Procedia PDF Downloads 42
3818 Benefits of Occupational Therapy for Children with Intellectual Disabilities in the Aspects of Vocational Activities and Instrumental Activities of Daily Life
Authors: Shakhawath Hossain, Tazkia Tahsin
Abstract:
Introduction/Background: Intellectual disability is characterized by significant limitations both in intellectual functioning and in adaptive behavior, which covers many everyday social and practical skills. Vocational education is a multi-professional approach provided to individuals of working age with health-related impairments, limitations, or restrictions in work functioning, whose primary aim is to optimize work participation. Instrumental Activities of Daily Living are activities that support daily life within the home and community, such as community mobility, financial management, meal preparation and clean-up, and shopping. Material and Method: Electronic searches of the Medline, PubMed, Google Scholar, and OT Seeker literature were conducted using the key terms intellectual disability, vocational rehabilitation, instrumental activities of daily living, and Occupational Therapy, along with a thorough manual search for relevant literature. Results: Thirteen articles, both qualitative and quantitative, are included in this review; all studies were mixed-methods in design. After receiving Occupational Therapy services, children showed significant improvement in various areas such as sensory issues, cognitive abilities, perceptual skills, visual and motor planning, and group therapy. After receiving training in vocational and instrumental activities of daily living, children with intellectual disabilities can participate in their daily activities and work as employees in different companies or organizations. Conclusion: Persons with intellectual disability are an integral part of our society and deserve social support and opportunities like other human beings. The results of the reviewed papers show significant benefits of Occupational Therapy services in the aspects of vocational and instrumental activities of daily living.
Keywords: occupational therapy, daily living activities, intellectual disabilities, instrumental ADL
Procedia PDF Downloads 130
3817 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites
Authors: Yung-Chung Chuang
Abstract:
The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for the protection and management of archaeological sites. However, in actual operation, on-site investigation and orthogonal photograph interpretation require a lot of time and manpower. It is therefore necessary to develop a good alternative for surface vegetation survey in an automated or semi-automated manner. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification with very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distribution, shadow effects and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). Therefore, the forest type and vegetation land cover can easily be predicted from the corresponding canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. For forest type and vegetation type prediction, the average prediction accuracy reached 80.3%-91.7% with different sizes of test frame. This indicates that the technology is useful for archaeological site surveys and can improve classification efficiency and the data update rate.
Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics
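The statistical chain described here — non-hierarchical clustering of landscape indices into canopy-character categories, then canonical discriminant functions to assign new patches — maps naturally onto k-means plus linear discriminant analysis. The sketch below uses random stand-ins for the landscape-index table; the index names, patch count, and the choice of k-means are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical landscape-index table: one row per canopy patch (e.g., shape index,
# perimeter/area ratio, texture measure, shadow fraction, gap fraction).
rng = np.random.default_rng(1)
indices = rng.standard_normal((300, 5))

# Non-hierarchical clustering into the seven canopy-character categories.
labels = KMeans(n_clusters=7, n_init=10, random_state=1).fit_predict(indices)

# Canonical discriminant functions that assign new patches to those categories.
lda = LinearDiscriminantAnalysis().fit(indices, labels)
print(lda.predict(indices[:5]))
```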
Procedia PDF Downloads 142
3816 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outliers' effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
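The kernel named in the abstract, the hyperbolic tangent (sigmoid) kernel tanh(gamma * <x, x'> + coef0), is available directly in scikit-learn's SVC. The snippet below is only the plain-SVM core on synthetic data; the fuzzy rough membership weighting and the Hadoop/MapReduce parallelization described in the abstract are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in data set; scaling matters for the sigmoid (hyperbolic tangent) kernel.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="sigmoid", C=1.0, gamma="scale", coef0=0.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```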
Procedia PDF Downloads 490
3815 Nonlinear Passive Shunt for Electroacoustic Absorbers Using Nonlinear Energy Sink
Authors: Diala Bitar, Emmanuel Gourdon, Claude H. Lamarque, Manuel Collet
Abstract:
Acoustic absorber devices play an important role in reducing noise along the propagation and reception paths. An electroacoustic absorber consists of a loudspeaker coupled to an electric shunt circuit, where the membrane plays the role of an absorber/reflector of sound. Although the use of linear shunt resistors at the transducer terminals has been shown to improve the performance of dynamical absorbers, it is only efficient in a narrow frequency band. Therefore, and since nonlinear phenomena are promising for their ability to absorb vibrations and sound over a larger frequency range, we propose to couple a nonlinear electric shunt circuit at the loudspeaker terminals. The equivalent model can then be described by a two-degree-of-freedom system, consisting of a primary linear oscillator describing the dynamics of the loudspeaker membrane, linearly coupled to a cubic nonlinear energy sink (NES). The system is analytically treated for the case of 1:1 resonance, using an invariant manifold approach at different time scales. The proposed methodology enables us to detect the equilibrium points and fold singularities at the first slow time scale, providing a predictive tool to design the nonlinear shunt circuit during the energy exchange process. The preliminary results are promising; a significant improvement of the acoustic absorption performance is obtained.
Keywords: electroacoustic absorber, multiple-time-scale with small finite parameter, nonlinear energy sink, nonlinear passive shunt
Procedia PDF Downloads 220
3814 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability
Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu
Abstract:
In this study, the characteristics and flight test analysis of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are presented. The base platform is chosen as a conventional airplane with throttle, aileron, elevator and rudder control surfaces, which inherently allows level flight. This aircraft is then mechanically modified by the integration of vertical propellers, as in multirotors, in order to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed utilizing small perturbation theory for trim conditions. Flight characteristics are analyzed using the state-space approach of linear control theory. Distinctive features of the aircraft are discussed based on the analysis results, with comparison to conventional aircraft platform types. A hybrid control system is proposed in order to reveal the unique flight characteristics. The main approach includes the design of different controllers for different modes of operation and a hand-over logic that makes flight in an enlarged flight envelope viable. Simulation tests performed on the mathematical models verify the proposed algorithms. Flight tests conducted in the real world revealed the applicability of the proposed methods in exploiting the fixed-wing and rotary-wing characteristics of the aircraft, which provide agility, survivability and functionality.
Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle
Procedia PDF Downloads 329
3813 Parameters Affecting the Elasto-Plastic Behavior of Outrigger Braced Walls to Earthquakes
Authors: T. A. Sakr, Hanaa E. Abd-El-Mottaleb
Abstract:
Outrigger-braced wall systems are commonly used to provide high-rise buildings with the required lateral stiffness for wind and earthquake resistance. The existence of outriggers adds to the stiffness and strength of walls, as reported by several studies. The effects of different parameters on the elasto-plastic dynamic behavior of outrigger-braced wall systems under earthquakes are investigated in this study. The parameters investigated include outrigger stiffness, concrete strength, and reinforcement arrangement as the main design parameters in wall design. In addition to being significant to the wall behavior, such parameters may lead to a change of failure mode and to the delay of crack propagation, and consequently of failure, as the wall is excited by earthquakes. A bi-linear stress-strain relation for concrete with limited tensile strength, and truss members with a bi-linear stress-strain relation for the reinforcement, were used in the finite element analysis of the problem. The famous El Centro (1940) earthquake record is used in the study. Emphasis was given to the lateral drift, normal stresses and crack pattern as behavior-controlling determinants. Results indicated a significant effect of the studied parameters, such that a stiffer outrigger, higher-grade concrete and concentrating the reinforcement at the wall edges enhance the behavior of the system. Concrete stresses and cracking behavior are significantly enhanced, while smaller improvements in drift are observed.
Keywords: outrigger, shear wall, earthquake, nonlinear
Procedia PDF Downloads 283
3812 Preparation of Pegylated Interferon Alpha-2b with High Antiviral Activity Using Linear 20 KDa Polyethylene Glycol Derivative
Authors: Ehab El-Dabaa, Omnia Ali, Mohamed Abd El-Hady, Ahmed Osman
Abstract:
Recombinant human interferon alpha 2 (rhIFN-α2) is FDA approved for the treatment of some viral and malignant diseases. Approved pegylated rhIFN-α2 drugs have highly improved pharmacokinetics, pharmacodynamics and therapeutic efficiency compared to the native protein. In this work, we studied the pegylation of purified, properly refolded rhIFN-α2b using a linear 20 kDa PEG-NHS (polyethylene glycol N-hydroxysuccinimidyl ester) to prepare pegylated rhIFN-α2b with high stability and activity. The effects of different parameters, such as final rhIFN-α2b concentration, pH, rhIFN-α2b/PEG molar ratio and reaction time, on the pegylation efficiency (high percentage of monopegylated rhIFN-α2b) have been studied in small-scale (100 µl) pegylation reaction trials. Study of the percentages of the different components of these reactions (mono-, di-, polypegylated rhIFN-α2b and unpegylated rhIFN-α2b) indicated that 2 h is the optimum time to complete the reaction. The pegylation efficiency increased at pH 8 (57.9%) by reducing the protein concentration to 1 mg/ml and reducing the rhIFN-α2b/PEG ratio to 1:2. Using a larger-scale pegylation reaction (65% pegylation efficiency), an ion exchange chromatography method has been optimized to prepare and purify the monopegylated rhIFN-α2b with high purity (96%). The prepared monopegylated rhIFN-α2b had an apparent molecular weight of approximately 65 kDa and high in vitro antiviral activity (2.1x10⁷ ± 0.8x10⁷ IU/mg). Although it retained approximately 8.4% of the antiviral activity of the unpegylated rhIFN-α2b, its activity is high compared to other pegylated rhIFN-α2 preparations developed using a similar approach or higher-molecular-weight branched PEG.
Keywords: antiviral activity, rhIFN-α2b, pegylation, pegylation efficiency
Procedia PDF Downloads 177
3811 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research on the emerging condition-based maintenance concept of prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to a surge in research on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based and the more robust time-frequency based features from these vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging classification and regression algorithms like locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE, as a comprehensive feature extraction technique, yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
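Locally linear embedding, the fusion/dimensionality-reduction step named in the abstract, is available in scikit-learn. The sketch below embeds a hypothetical table of already-extracted time-, frequency- and time-frequency-domain features and feeds the embedding to a downstream classifier; the feature counts, neighbor count, and condition labels are assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical vibration-feature matrix: rows are pump records, columns are features
# already extracted from the time, frequency, and time-frequency domains.
rng = np.random.default_rng(0)
features = rng.standard_normal((240, 30))
condition = rng.integers(0, 3, 240)        # e.g., healthy / partially degraded / faulty labels

# LLE fuses the high-dimensional feature set into a low-dimensional embedding.
embedding = LocallyLinearEmbedding(n_neighbors=12, n_components=3).fit_transform(features)

# A classifier in the embedded space then assesses the pump condition.
print(cross_val_score(SVC(), embedding, condition, cv=5).mean())
```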
Procedia PDF Downloads 117
3810 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron
Authors: Filippo Portera
Abstract:
Usually, standard algorithms employ a loss in which each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. Here, we present several error-weighting schemes that are a generalization of this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for the Multilayer Perceptron. Results show that the error is never worse than with the standard procedure, and several times it is better.
Keywords: loss, binary-classification, MLP, weights, regression
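The abstract does not spell out its weighting schemes, so the snippet below is only a generic illustration of the idea of scaling each per-sample error by a weight instead of using the plain squared difference. The specific linear and exponential schemes, and the parameter alpha, are invented for this sketch and are not the paper's formulas.

```python
import numpy as np

def weighted_mse(y_true, y_pred, scheme="none", alpha=1.0):
    """Squared error in which each residual is scaled by a sample weight."""
    err = y_true - y_pred
    if scheme == "linear":          # hypothetical: weight grows with the target magnitude
        w = 1.0 + alpha * np.abs(y_true)
    elif scheme == "exponential":   # hypothetical: emphasize the largest targets more strongly
        w = np.exp(alpha * np.abs(y_true) / np.abs(y_true).max())
    else:                           # standard, unweighted loss as the baseline
        w = np.ones_like(y_true)
    return float(np.mean(w * err ** 2))

y_true = np.array([0.5, 1.0, 4.0, 8.0])
y_pred = np.array([0.4, 1.2, 3.5, 9.0])
print(weighted_mse(y_true, y_pred), weighted_mse(y_true, y_pred, "linear"))
```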
Procedia PDF Downloads 95
3809 Some Inequalities Related with Starlike Log-Harmonic Mappings
Authors: Melike Aydoğan, Dürdane Öztürk
Abstract:
Let H(D) be the linear space of all analytic functions defined on the open unit disc D. A log-harmonic mapping f is a solution of the nonlinear elliptic partial differential equation \overline{f_{\bar z}} / \overline{f} = w(z) (f_z / f), where w(z) ∈ H(D) is the second dilatation such that |w(z)| < 1 for all z ∈ D. The aim of this paper is to establish some inequalities for starlike log-harmonic functions of order α (0 ≤ α ≤ 1).
Keywords: starlike log-harmonic functions, univalent functions, distortion theorem
Procedia PDF Downloads 526
3808 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of Naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for Naftopidil (Rf value of 0.43±0.02). Densitometric analysis of Naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis of the calibration plots showed a good linear relationship, with r2 = 0.999±0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation. The drug undergoes degradation under acidic, basic, oxidative and thermal conditions, indicating that it is susceptible to these stress conditions. The degraded product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of Naftopidil in bulk drug and pharmaceutical formulation.
Keywords: naftopidil, HPTLC, validation, stability, degradation
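The calibration part of such a validation (a least-squares line of peak area against amount per spot, plus detection and quantification limits from the residual standard deviation) can be reproduced with a few lines of NumPy. The peak-area numbers below are made up for illustration, and the 3.3σ/S and 10σ/S formulas are the common ICH-style estimates, not necessarily the exact procedure used in the paper.

```python
import numpy as np

# Hypothetical calibration data: peak area vs. amount per spot (ng), spanning 200-1200 ng.
amount = np.array([200, 400, 600, 800, 1000, 1200], dtype=float)
area = np.array([1510, 2980, 4490, 6020, 7480, 9010], dtype=float)

# Least-squares calibration line and coefficient of determination.
slope, intercept = np.polyfit(amount, area, 1)
r2 = np.corrcoef(amount, area)[0, 1] ** 2

# Detection and quantification limits from the residual standard deviation of the fit.
resid_sd = np.std(area - (slope * amount + intercept), ddof=2)
lod, loq = 3.3 * resid_sd / slope, 10 * resid_sd / slope
print(round(r2, 4), round(lod, 1), round(loq, 1))
```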
Procedia PDF Downloads 400
3807 Vehicle Routing Problem Considering Alternative Roads under Triple Bottom Line Accounting
Authors: Onur Kaya, Ilknur Tukenmez
Abstract:
In this study, we consider vehicle routing problems on networks with alternative direct links between nodes, and we analyze a multi-objective problem considering the financial, environmental and social objectives in this context. In real life, there may exist several alternative direct roads between two nodes, and these roads might differ in terms of their lengths and durations. For example, a road might be shorter than another but require a longer time due to traffic and speed limits. Similarly, some toll roads might be shorter or faster but require additional payment, leading to higher costs. We consider such alternative links in our problem and develop a mixed integer linear programming model that determines which alternative link to use between two nodes, in addition to determining the optimal routes for the different vehicles, depending on the model objectives and constraints. We consider minimum-cost routing as the financial objective for the company; minimizing CO2 emissions and gas usage as the environmental objectives; and optimizing driver working conditions/working hours and minimizing the risk of accidents as the social objectives. With these objective functions, we aim to determine which routes and which alternative links should be used, in addition to the speed choices on each link. We discuss the results of the developed vehicle routing models and compare their results depending on the system parameters.
Keywords: vehicle routing, alternative links between nodes, mixed integer linear programming, triple bottom line accounting
Procedia PDF Downloads 407
3806 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, however, the matrices arising from the discretization of BIEs are dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of the exponential speed-up obtained for sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
Procedia PDF Downloads 154
3805 Preparation of IPNs and Effect of Swift Heavy Ions Irradiation on their Physico-Chemical Properties
Authors: B. S Kaith, K. Sharma, V. Kumar, S. Kalia
Abstract:
Superabsorbents are three-dimensional networks of linear or branched polymeric chains which can take up large volumes of biological fluids. This ability is due to the presence of functional groups like –NH2, –COOH and –OH. Such cross-linked products based on natural materials, such as cellulose, starch, dextran, gum and chitosan, have attracted the attention of scientists and technologists all over the world because of their easy availability, low production cost, non-toxicity and biodegradability. Since natural polymers have better biocompatibility and lower toxicity than most synthetic ones, such materials can be applied in the preparation of controlled drug delivery devices, biosensors, tissue engineering, contact lenses, soil conditioning, and the removal of heavy metal ions and dyes. Gums are natural potential antioxidants and are used as food additives. They have excellent properties like high solubility, pH stability, non-toxicity and gelling characteristics. To date, many methods have been applied for the synthesis and modification of cross-linked materials with improved properties suitable for different applications. It is well known that ion beam irradiation can play a crucial role in synthesizing, modifying, cross-linking or degrading polymeric materials. Irradiation of a polymer film with highly energetic heavy ions induces significant changes like chain scission, cross-linking, structural changes, amorphization and degradation in the bulk. Various researchers have reported the effects of low and heavy ion irradiation on the properties of polymeric materials and observed significant improvements in optical, electrical, chemical, thermal and dielectric properties. Moreover, the modifications induced in the materials mainly depend on the structure, the ion beam parameters like energy, linear energy transfer, fluence, mass and charge, and the nature of the target material. Ion-beam irradiation is a useful technique for improving the surface properties of biodegradable polymers without compromising the bulk properties. Therefore, considerable interest has grown in studying the effects of SHI irradiation on the properties of synthesized semi-IPNs and IPNs. The present work deals with the preparation of semi-IPNs and IPNs and the impact of SHI irradiation, such as with O⁷⁺ and Ni⁹⁺ ions, on their optical, chemical, structural, morphological and thermal properties, along with the impact on different applications. The results have been discussed on the basis of the linear energy transfer (LET) of the ions.
Keywords: adsorbent, gel, IPNs, semi-IPNs
Procedia PDF Downloads 372
3804 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients
Authors: Bliss Singhal
Abstract:
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing the noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels
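The preprocessing stage named in the abstract — bandpass filtering of the EEG channels followed by independent component analysis to suppress artifacts — can be sketched with SciPy and scikit-learn. The channel count, sampling rate, and band edges below are assumptions for illustration, not the study's recording setup.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def bandpass(eeg, low=0.5, high=40.0, fs=256.0, order=4):
    """Zero-phase Butterworth band-pass applied to each EEG channel (channels x samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

# Hypothetical 19-channel EEG segment: 10 seconds at 256 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 2560))

filtered = bandpass(eeg)
# ICA unmixes the filtered channels into independent components (e.g., blink or muscle
# artifacts vs. neural sources); artifact components can then be dropped before modeling.
sources = FastICA(n_components=19, random_state=0).fit_transform(filtered.T).T
print(sources.shape)
```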
Procedia PDF Downloads 84
3803 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications as they relate directly to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction in health care data, different techniques and different frameworks have proved to be effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform, specifically for health care big data, and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that the accuracy is largely dependent on the characteristics of the datasets for the other machine learning techniques, whereas gradient boosted trees yield reasonably stable results in terms of accuracy without depending largely on the dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
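Spark's MLlib exposes gradient boosted trees directly, so the core of such a framework can be sketched in a few lines of PySpark. The file name, column names, split ratio, and iteration count below are assumptions for illustration, not the paper's datasets or settings.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("healthcare-gbt").getOrCreate()

# Hypothetical health-care table with numeric predictor columns and a binary "label" column.
df = spark.read.csv("patients.csv", header=True, inferSchema=True)
predictors = [c for c in df.columns if c != "label"]
data = VectorAssembler(inputCols=predictors, outputCol="features").transform(df)

train, test = data.randomSplit([0.8, 0.2], seed=42)
model = GBTClassifier(labelCol="label", featuresCol="features", maxIter=50).fit(train)

# Misclassification error rate, the comparison metric used in the abstract.
accuracy = MulticlassClassificationEvaluator(labelCol="label",
                                             metricName="accuracy").evaluate(model.transform(test))
print("misclassification error rate:", 1 - accuracy)
```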
Procedia PDF Downloads 240
3802 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is deduced through standard tests, which highlight a good fit of the simulated values.
Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression
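The deterministic component described here — a long-term trend plus Fourier terms for the daily, weekly, and yearly periodicities — can be fitted by least squares as sketched below. The synthetic series, the chosen periods, and the simple linear trend are assumptions for illustration; the stochastic residual would then be modeled with an ARMA-GARCH process, as the abstract indicates.

```python
import numpy as np

# Hypothetical hourly load series over two years: trend, daily/weekly/yearly cycles, noise.
hours = np.arange(2 * 365 * 24)
load = (500 + 0.01 * hours
        + 80 * np.sin(2 * np.pi * hours / 24)
        + 40 * np.sin(2 * np.pi * hours / (24 * 7))
        + 60 * np.sin(2 * np.pi * hours / (24 * 365))
        + np.random.default_rng(0).normal(0, 15, hours.size))

# Design matrix: intercept, linear trend, and sine/cosine pairs for each periodicity.
periods = [24, 24 * 7, 24 * 365]
cols = [np.ones_like(hours, dtype=float), hours.astype(float)]
for p in periods:
    cols += [np.sin(2 * np.pi * hours / p), np.cos(2 * np.pi * hours / p)]
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ beta     # stochastic component, to be fitted with ARMA-GARCH
print(beta[:2], residuals.std())
```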
Procedia PDF Downloads 395
3801 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional neural networks and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles try to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by a neural network is nonlinear in nature and involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation to extract the frequency components has been optimized by a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning deep machine learning models by hyperparameter optimization of the neural network models enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
Procedia PDF Downloads 150
3800 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section
Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert
Abstract:
Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets since characterization measurements can be expensive and time consuming. It is thus important to accurately predict the bistatic RCS of targets. Computational electromagnetic (CEM) methods can be used for bistatic RCS prediction. CEM methods are divided into full-wave and asymptotic methods. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations. These methods are very accurate but are computationally very intensive and time consuming. Asymptotic techniques make simplifying assumptions in solving Maxwell's equations and are thus less accurate, but require less computational resources and time. Asymptotic techniques can thus be very valuable for the prediction of the bistatic RCS of electrically large targets, due to the decreased computational requirements. This study extends previous work by validating the accuracy of asymptotic techniques to predict bistatic RCS through comparison with full-wave simulations as well as measurements. Validation is done with canonical structures as well as complex realistic aircraft models, instead of only looking at a complex slicy structure. The slicy structure is a combination of canonical structures, including cylinders, corner reflectors and cubes. Validation is done over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa. The measurements were performed at different polarizations from 2 GHz to 6 GHz. Fixed bistatic angles of β = 30.8°, 45° and 90° were used. The measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate the simulated results. The full-wave multi-level fast multipole method (MLFMM) simulated results, together with the measured data, were used as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences relating to amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy seemed to decrease as the bistatic angle increased. At large bistatic angles, PO did not perform well due to the shadow regions not being treated appropriately. PO also did not perform well for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies tended to decrease as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing. Specular scattering was calculated accurately even if targets did not meet the electrically large criterion. It was evident that the bistatic RCS prediction performance of PO and GO depends on the incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers yields a major advantage over full-wave solvers and measurements; however, there is still much room for improvement of the accuracy of these asymptotic techniques.
Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics
Procedia PDF Downloads 258
3799 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries
Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand
Abstract:
Introduction: Situations with limited facilities require allocating the available resources to the greatest number of casualties. Accordingly, Traumatic Brain Injury (TBI) is a condition that may require transporting the patient as soon as possible, and in a mass casualty event, making such decisions when facilities are restricted is hard. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. Therefore, we aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), one-month (7.51 ± 1.30), and three-month (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002) and white blood cell (WBC) count. Among imaging signs and trauma-related symptoms in the univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in the multivariable analysis. Conclusion: Our study identified some predictive factors that could help decide which casualty should be transported earlier to a trauma center. According to the current study findings, GCS, pulse rate, and WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
Keywords: field, Glasgow outcome score, prediction, traumatic brain injury
Procedia PDF Downloads 75
3798 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as the Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
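Fisher scoring, the iterative MLE scheme named in the abstract, is easy to demonstrate on a simpler stand-in model. The sketch below fits an exponential regression with a log link (mean loss depending on two hypothetical predictors); the composite-distribution likelihood of the paper would replace this density, and all data and coefficients here are simulated.

```python
import numpy as np

# Simulated loss data: exponential losses whose mean exp(X @ beta) depends on two predictors.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
beta_true = np.array([1.0, 0.5, -0.3])
y = rng.exponential(np.exp(X @ beta_true))

# Fisher scoring: beta_new = beta + I(beta)^{-1} * score(beta).
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y / mu - 1)      # gradient of the log-likelihood for this model
    fisher = X.T @ X                # expected information (unit weights for this density)
    step = np.linalg.solve(fisher, score)
    beta += step
    if np.max(np.abs(step)) < 1e-8:
        break
print(beta)                          # close to beta_true for a large enough sample
```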
Procedia PDF Downloads 34