Search results for: estimation algorithms
1109 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, which is required to validate assumptions when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study and with the available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps to achieve the study objective even in the presence of many constraints. This research paper explains how a hybrid study design can be planned along with an integrated technique (SEV) to plan the pediatric study. In brief, the SEV technique (Simulation, Estimation (using borrowed adult data and Bayesian methods), and Validation) simulates the planned study data and obtains the desired estimates to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, which allows informed decisions to be made well ahead of study initiation. This technique allows insight to be gained into best practices when using data from a historical study and simulated data alike. Keywords: adaptive design, simulation, borrowing data, bayesian model
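Since the abstract only names the SEV steps, the following minimal Python sketch illustrates one way the Estimation step could borrow adult data, assuming a conjugate normal model with known variance and a power-prior weight; all names and numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical inputs (illustrative, not from the paper): adult historical
# summary and a simulated pediatric pilot sample, both normal with known sigma.
adult_mean, adult_n = 1.8, 400          # historical adult effect estimate
ped = rng.normal(1.5, 1.0, size=30)     # Simulation step: pediatric pilot data
sigma, a0 = 1.0, 0.5                    # known SD; power-prior weight (0 = no borrowing)

# Estimation step: the adult likelihood raised to a0 behaves like a0*adult_n
# borrowed subjects in a conjugate normal-normal update.
prior_prec = a0 * adult_n / sigma**2
data_prec = ped.size / sigma**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * adult_mean + data_prec * ped.mean()) / post_prec
post_sd = post_prec ** -0.5

# Validation step: probability that the design assumption (effect > 1.0) holds.
print(f"posterior {post_mean:.2f} +/- {post_sd:.2f}; "
      f"P(effect > 1.0) = {1 - norm.cdf(1.0, post_mean, post_sd):.3f}")
```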
Procedia PDF Downloads 77
1108 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers
Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus
Abstract:
Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach and a novel constraint handling strategy is presented in this study. Since conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find near Pareto-optimal solutions with higher accuracy when compared to the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is resolved in this algorithm. Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling
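The abstract does not spell out the ranking rule, so the sketch below shows one common feasibility-based comparison (Deb-style rules) combined with weighted-sum scalarization, as it might be used inside a PSO personal-best update; the objective values, weights, and constraint values are assumed for illustration.

```python
import numpy as np

def weighted_sum(objs, weights):
    """Scalarize multiple objectives (to be minimized) with fixed weights."""
    return float(np.dot(weights, objs))

def violation(g_values):
    """Total constraint violation; g(x) <= 0 means feasible."""
    return float(np.sum(np.maximum(0.0, g_values)))

def better(cand, incumbent):
    """Feasibility-based ranking (assumed Deb-style rules):
    1. a feasible solution beats an infeasible one,
    2. two feasible solutions compare on weighted-sum fitness,
    3. two infeasible solutions compare on total violation.
    No penalty parameters are needed anywhere."""
    f_c, v_c = cand
    f_i, v_i = incumbent
    if v_c == 0 and v_i == 0:
        return f_c < f_i
    if v_c == 0 or v_i == 0:
        return v_c == 0
    return v_c < v_i

# Illustrative use inside a PSO personal-best update:
weights = np.array([0.6, 0.4])                      # assumed objective weights
cand = (weighted_sum([3.1, 2.0], weights), violation([-0.2, 0.1]))
pbest = (weighted_sum([2.8, 2.5], weights), violation([0.0, 0.0]))
print("replace personal best:", better(cand, pbest))
```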
Procedia PDF Downloads 555
1107 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java
Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi
Abstract:
East Java Province ranks first among Indonesian provinces in the number of counties and cities and has the largest population. In 2015, the population reached 38,847,561, a figure that reflects very high population growth. High population growth is feared to increase unemployment levels. In this study, the researchers mapped and modeled the unemployment rate with six variables presumed to influence it. Modeling was done by nonparametric geographically weighted regression with a truncated spline approach. This method was chosen because the spline is a flexible method; such models tend to find their own estimate. In this modeling, there were knot points, the points that mark changes in the data. The optimum knot points were selected by minimizing the Generalized Cross Validation (GCV) value. Based on the research, all six variables were found to affect the level of unemployment in East Java. They were the percentage of the population educated above high school, the rate of economic growth, the population density, the investment ratio of the total labor force, the regional minimum wage and the ratio of the number of large and medium-scale industries to the labor force. The nonparametric geographically weighted regression model with truncated spline approach had a coefficient of determination of 98.95% and an MSE of 0.0047. Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployment rate
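As an illustration of selecting knot points by minimizing GCV, here is a minimal Python sketch with a degree-1 truncated power spline basis on synthetic data; the geographically weighted estimation used in the actual study is omitted.

```python
import numpy as np

def truncated_basis(x, knots, degree=1):
    """Truncated power spline basis: 1, x, ..., x^d, (x - k)_+^d per knot."""
    cols = [x**p for p in range(degree + 1)]
    cols += [np.maximum(0.0, x - k) ** degree for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """Generalized Cross Validation score for a given set of knot points."""
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T        # hat matrix
    resid = y - H @ y
    n = len(y)
    return n * np.sum(resid**2) / (n - np.trace(H)) ** 2

# Illustrative knot search on synthetic data (the study's data are not public here).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 80))
y = np.where(x < 5, 0.5 * x, 2.5 + 2.0 * (x - 5)) + rng.normal(0, 0.3, 80)
candidates = [[k] for k in np.linspace(1, 9, 17)]
best = min(candidates, key=lambda k: gcv(x, y, k))
print("optimum knot by minimum GCV:", best)
```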
Procedia PDF Downloads 321
1106 Roughness Discrimination Using Bioinspired Tactile Sensors
Authors: Zhengkun Yi
Abstract:
Surface texture discrimination using artificial tactile sensors has attracted increasing attention in the past decade, as it can endow technical and robot systems with a key missing ability. However, as a major component of texture, roughness has rarely been explored. This paper presents an approach for tactile surface roughness discrimination, which includes two parts: (1) design and fabrication of a bioinspired artificial fingertip, and (2) tactile signal processing for tactile surface roughness discrimination. The bioinspired fingertip is comprised of two polydimethylsiloxane (PDMS) layers, a polymethyl methacrylate (PMMA) bar, and two perpendicular polyvinylidene difluoride (PVDF) film sensors. This artificial fingertip mimics human fingertips in three aspects: (1) the elastic properties of the epidermis and dermis in human skin are replicated by the two PDMS layers with different stiffness, (2) the PMMA bar serves a role analogous to that of a bone, and (3) the PVDF film sensors emulate Meissner’s corpuscles in terms of both location and response to vibratory stimuli. Various extracted features and classification algorithms, including support vector machines (SVM) and k-nearest neighbors (kNN), are examined for tactile surface roughness discrimination. Eight standard rough surfaces with roughness values (Ra) of 50 μm, 25 μm, 12.5 μm, 6.3 μm, 3.2 μm, 1.6 μm, 0.8 μm, and 0.4 μm are explored. The highest classification accuracy of (82.6 ± 10.8)% can be achieved using only one PVDF film sensor with a kNN (k = 9) classifier and the standard deviation feature. Keywords: bioinspired fingertip, classifier, feature extraction, roughness discrimination
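A minimal sketch of the best-performing configuration reported above (a kNN classifier with k = 9 on the standard-deviation feature of a single PVDF signal), using synthetic stand-in signals since the actual recordings are not available here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Illustrative stand-ins for the PVDF recordings: 8 roughness classes,
# several sliding trials each (real signals would come from the sensor).
rng = np.random.default_rng(2)
ra_values = [50, 25, 12.5, 6.3, 3.2, 1.6, 0.8, 0.4]     # micrometres
signals, labels = [], []
for cls, ra in enumerate(ra_values):
    for _ in range(20):
        signals.append(rng.normal(0.0, 0.05 + 0.01 * ra, size=2000))
        labels.append(cls)

# The paper's best single feature: the standard deviation of the PVDF signal.
X = np.array([[s.std()] for s in signals])
y = np.array(labels)

knn = KNeighborsClassifier(n_neighbors=9)               # k = 9, as reported
scores = cross_val_score(knn, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```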
Procedia PDF Downloads 313
1105 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm
Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu
Abstract:
Forecasting models have a great impact on prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on different fuzzy-time-series-based forecasting methods to solve forecasting problems. A forecasting model's accuracy depends mainly on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Different hybrid forecasting models have combined fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model that combines first-order as well as high-order fuzzy time series with particle swarm optimization to improve forecast accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. Firstly, we consider an automatic clustering algorithm to calculate the appropriate intervals for the historical enrollments. Then, particle swarm optimization and fuzzy time series are combined, which shows better forecasting accuracy than other existing forecasting models. Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model
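For concreteness, a first-order fuzzy time series forecast in the style of Chen's method is sketched below; equal-width intervals stand in for the paper's automatic clustering, the PSO tuning of interval boundaries is omitted, and the series values are enrollment-like illustrations.

```python
import numpy as np

def fts_forecast(series, n_intervals=7):
    """One-step first-order fuzzy time series forecast (Chen-style).
    Equal-width intervals stand in for the paper's automatic clustering."""
    lo, hi = min(series) - 1, max(series) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    def fuzzify(v):
        return int(np.clip(np.searchsorted(edges, v) - 1, 0, n_intervals - 1))

    states = [fuzzify(v) for v in series]
    flrg = {}                                  # fuzzy logical relationship groups
    for a, b in zip(states, states[1:]):
        flrg.setdefault(a, set()).add(b)

    nxt = flrg.get(states[-1], {states[-1]})   # consequents of the last state
    return float(np.mean([mids[j] for j in nxt]))

# Enrollment-like values for illustration (the study uses the University of
# Alabama series):
enrollments = [13055, 13563, 13867, 14696, 15460, 15311, 15603,
               15861, 16807, 16919, 16388, 15433, 15497, 15145]
print("next-period forecast:", round(fts_forecast(enrollments)))
```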
Procedia PDF Downloads 250
1104 The Signaling Power of ESG Accounting in Sub-Sahara Africa: A Dynamic Model Approach
Authors: Haruna Maama
Abstract:
Environmental, social and governance (ESG) reporting is gaining considerable attention despite being voluntary. Meanwhile, providing ESG reporting consumes resources, raising the question of its value relevance. The study examined the impact of ESG reporting on the market value of listed firms in SSA, using the annual and integrated reports of 276 listed sub-Saharan Africa (SSA) firms. The integrated reporting scores of the firms were analysed using a content analysis method. A multiple regression estimation technique using a GMM approach was employed for the analysis. The results revealed that ESG has a positive relationship with firms’ market value, suggesting that investors are interested in the ESG information disclosure of firms in SSA. This suggests that extensive ESG disclosures are attempts by firms to obtain the approval of powerful social, political and environmental stakeholders, especially institutional investors. Furthermore, the market value analysis evidence is consistent with signalling theory, which postulates that firms provide integrated reports as a signal to influence the behaviour of stakeholders. This finding reflects the value investors place on social, environmental and governance disclosures, which affirms the view that conventional investors care about the social, environmental and governance issues of their potential or existing investee firms. Overall, the evidence is consistent with the prediction of signalling theory. In the context of this theory, integrated reporting is seen as part of firms' overall competitive strategy to influence investors' behaviour. The findings of this study make unique contributions to knowledge and practice in corporate reporting. Keywords: environmental accounting, ESG accounting, signalling theory, sustainability reporting, sub-Saharan Africa
Procedia PDF Downloads 77
1103 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological and sociological concern. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based both on knowledge about the measurement process and on the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has been monitored, and multiple sensors have been mounted at candidate locations to collect the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features of our research work. Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method
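A minimal sketch of the MCS option: propagate assumed input uncertainties through a deliberately simplified consumption model and summarize the output distribution in the style of GUM Supplement 1. The model form and the uncertainty magnitudes are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000                      # number of Monte Carlo draws

# A deliberately simplified steady-state model of heating consumption:
# Q = U * A * (T_in - T_out) * hours.  Inputs carry measurement uncertainty.
U = rng.normal(1.2, 0.15, N)     # W/m^2K, heat-loss coefficient (assumed)
A = rng.normal(102.0, 2.0, N)    # m^2, monitored zone surface (from the paper)
T_in = rng.normal(21.0, 0.5, N)  # sensor uncertainty on indoor temperature
T_out = rng.normal(8.0, 1.0, N)  # outdoor temperature
hours = 24 * 30                  # one month

Q = U * A * (T_in - T_out) * hours / 1000.0   # kWh

# GUM Supplement 1 style summary: estimate, standard uncertainty, 95% interval.
lo, hi = np.percentile(Q, [2.5, 97.5])
print(f"Q = {Q.mean():.0f} kWh, u(Q) = {Q.std(ddof=1):.0f} kWh, "
      f"95% interval [{lo:.0f}, {hi:.0f}] kWh")
```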
Procedia PDF Downloads 459
1102 Optimization of Spatial Light Modulator to Generate Aberration Free Optical Traps
Authors: Deepak K. Gupta, T. R. Ravindran
Abstract:
Holographic Optical Tweezers (HOTs) generally use iterative algorithms such as weighted Gerchberg-Saxton (WGS) to generate multiple traps, which theoretically produce traps with 99% uniformity. In experiments, however, it is the phase response of the spatial light modulator (SLM) that ultimately determines the efficiency, uniformity, and quality of the trap spots. In general, SLMs show nonlinear phase response behavior, and they may even have asymmetric phase modulation depth before and after π. This affects the resolution with which the gray levels are addressed before and after π, leading to degraded trap performance. We present a method to optimize the SLM for a linear phase response along with a symmetric phase modulation depth around π. Further, we optimize the SLM for its varying phase response over different spatial regions by optimizing the brightness/contrast and gamma of the hologram in different subsections. We show the effect of the optimization on an array of trap spots, resulting in improved efficiency and uniformity. We also calculate the spot sharpness metric and trap performance metric and show a tightly focused spot with reduced aberration. The trap performance is compared by calculating the trap stiffness of a trapped particle in a given trap spot before and after aberration correction. The trap stiffness is found to improve by 200% after the optimization. Keywords: spatial light modulator, optical trapping, aberration, phase modulation
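A sketch of the linearization idea: invert a measured gray-level-to-phase curve into a lookup table so that the effective response becomes linear over 0-2π. The measured curve here is a synthetic stand-in, and per-subsection correction would repeat the same construction on image tiles.

```python
import numpy as np

# Hypothetical measured phase response of the SLM: phase (rad) produced by
# each of the 256 addressed gray levels, here a nonlinear stand-in curve.
gray = np.arange(256)
measured_phase = 2 * np.pi * (gray / 255.0) ** 1.4      # stand-in measurement

# Build a lookup table that inverts the measured curve so that the
# *effective* response becomes linear from 0 to 2*pi.
target_phase = np.linspace(0, 2 * np.pi, 256)
lut = np.interp(target_phase, measured_phase, gray).round().astype(np.uint8)

# Applying the LUT to a hologram before display linearizes the response;
# per-subsection LUTs (as in the paper) would repeat this on image tiles.
hologram = np.random.default_rng(4).integers(0, 256, (512, 512), dtype=np.uint8)
corrected = lut[hologram]
print(lut[:8], "...", lut[-4:])
```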
Procedia PDF Downloads 188
1101 A New Optimization Algorithm for Operation of a Microgrid
Authors: Sirus Mohammadi, Rohala Moghimi
Abstract:
The main advantages of microgrids are high energy efficiency through the application of Combined Heat and Power (CHP), high quality and reliability of the delivered electric energy, and environmental and economic benefits. This study presents an energy management system (EMS) to optimize the operation of the microgrid (MG). In this paper, an Adaptive Modified Firefly Algorithm (AMFA) is presented for optimal operation of a typical MG with renewable energy sources (RESs) accompanied by a back-up Micro-Turbine/Fuel Cell/Battery hybrid power source to level the power mismatch or to store the energy surplus when needed. The problem is formulated as a nonlinear constrained problem to minimize the total operating cost. The management of the energy storage system (ESS), economic load dispatch and operation optimization of distributed generation (DG) are combined into a single-objective optimization problem in the EMS. The proposed algorithm is tested on a typical grid-connected MG including WT/PV/Micro-Turbine/Fuel Cell and Energy Storage Devices (ESDs), and its superior performance is compared with those of other evolutionary algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Fuzzy Self Adaptive PSO (FSAPSO), Chaotic Particle PSO (CPSO), Adaptive Modified PSO (AMPSO), and the Firefly Algorithm (FA). Keywords: microgrid, operation management, optimization, adaptive modified firefly algorithm (AMFA)
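The AMFA itself is not published in the abstract; the sketch below shows the core movement step of the basic firefly algorithm on a toy two-unit dispatch cost, which the adaptive modifications would build upon. All problem data are illustrative.

```python
import numpy as np

def firefly_step(X, cost, alpha=0.2, beta0=1.0, gamma=1.0, rng=None):
    """One iteration of the basic firefly algorithm on population X
    (rows = fireflies); cost is the operating cost to be minimized.
    The paper's AMFA adds adaptive parameter tuning on top of this core."""
    if rng is None:
        rng = np.random.default_rng()
    f = np.array([cost(x) for x in X])
    X = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if f[j] < f[i]:                          # j is brighter (cheaper)
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
    return X

# Toy dispatch problem: two DG set-points with a quadratic cost bowl.
cost = lambda x: 0.05 * (x[0] - 20) ** 2 + 0.08 * (x[1] - 15) ** 2 + 40
pop = np.random.default_rng(5).uniform(0, 50, (15, 2))
for _ in range(50):
    pop = firefly_step(pop, cost)
print("best set-point:", pop[np.argmin([cost(x) for x in pop])].round(2))
```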
Procedia PDF Downloads 341
1100 Resource Leveling Optimization in Construction Projects of High Voltage Substations Using Nature-Inspired Intelligent Evolutionary Algorithms
Authors: Dimitrios Ntardas, Alexandros Tzanetos, Georgios Dounias
Abstract:
High Voltage Substations (HVS) are the intermediate step between the production of power and successfully transmitting it to clients, making them one of the most important checkpoints in power grids. Nowadays, as renewable resources and consequently distributed generation are growing fast, the construction of HVS is of high importance both in terms of quality and completion time, so that new energy producers can quickly and safely integrate into power grids. The resources needed, such as machines and workers, should be carefully allocated so that the construction of an HVS is completed on time, at the lowest possible cost (e.g., avoiding additional costs, not considered in planning, that arise from project delays), and at the highest quality. In addition, there are milestones and several checkpoints to be precisely achieved during construction to ensure cost and timeline control and to ensure that the percentage of governmental funding will be granted. The management of such a demanding project is an NP-hard problem that consists of precedence constraints and resource limits for each task of the project. In this work, a hybrid meta-heuristic method is implemented to solve this problem. Meta-heuristics have been proven to be quite useful when dealing with high-dimensional constrained optimization problems, and hybridizing them boosts their performance. Keywords: hybrid meta-heuristic methods, substation construction, resource allocation, time-cost efficiency
Procedia PDF Downloads 152
1099 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in the flood quantile at an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states. Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
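For reference, a simplified low-outlier screen in Python using the 10%-level Grubbs-Beck critical-value approximation from Bulletin 17B; the outward-in sweep is an illustrative stand-in for the full multiple Grubbs-Beck procedure in FLIKE, and the peak-flow values are invented.

```python
import numpy as np

def mgbt_low_outliers(flows):
    """Simplified multiple Grubbs-Beck screen: for k = n/2 ... 1, test whether
    the k-th smallest log10 flow falls below the threshold computed from the
    sample with the k smallest values removed, using the Bulletin 17B 10%
    critical value K_N = -0.9043 + 3.345*sqrt(log10 N) - 0.4046*log10 N.
    An illustrative stand-in for the full MGB test used in FLIKE."""
    x = np.sort(np.log10(np.asarray(flows, dtype=float)))
    n = x.size
    for k in range(n // 2, 0, -1):               # sweep from the outside in
        rest = x[k:]
        m = rest.size
        K = -0.9043 + 3.345 * np.sqrt(np.log10(m)) - 0.4046 * np.log10(m)
        cutoff = rest.mean() - K * rest.std(ddof=1)
        if x[k - 1] < cutoff:
            return k, 10 ** cutoff               # k low outliers detected
    return 0, None

peaks = [820, 740, 12, 650, 900, 15, 1100, 540, 760, 8, 980, 610, 700, 865, 30]
k, cutoff = mgbt_low_outliers(peaks)
print(f"{k} potentially influential low flows below {cutoff:.0f}")
```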
Procedia PDF Downloads 450
1098 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms
Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna
Abstract:
In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11-element feature vectors obtained from the sensors on the glove into one of 27 classes: the ASL alphabet letters and a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contact at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant and portable. Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove
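As a sketch of the classification stage, the snippet below trains stand-ins for the two model families named above (linear discriminant analysis for the linear Bayes classifier, and a small MLP) on synthetic 11-element glove vectors; real data would come from the bend, accelerometer, and contact sensors.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Stand-in glove readings: 11 features per sample, matching the paper's layout
# (6 bend sensors + 3 accelerometer axes + 2 contact sensors), and 27 classes
# (ASL letters plus a space gesture).  Real data would come from the glove.
rng = np.random.default_rng(6)
n_classes, per_class = 27, 40
X = np.vstack([rng.normal(c, 1.0, (per_class, 11)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LDA plays the role of the linear Bayes classifier; the MLP plays the
# multilayer neural network.
for clf in (LinearDiscriminantAnalysis(),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, f"accuracy = {clf.score(X_te, y_te):.3f}")
```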
Procedia PDF Downloads 302
1097 The Effects of Time and Cyclic Loading to the Axial Capacity for Offshore Pile in Shallow Gas
Authors: Christian H. Girsang, M. Razi B. Mansoor, Noorizal N. Huang
Abstract:
An offshore platform was installed in 1977 about 260 km offshore West Malaysia at a water depth of 73.6 m. Twelve (12) piles were installed, of which four (4) are skirt piles. The piles have a 1.219 m outside diameter and a wall thickness of 31 mm and were driven to 109 m below the seabed. Deterministic analyses of the pile capacity under axial loading were conducted using the current API (American Petroleum Institute) method and four (4) CPT-based methods: the ICP (Imperial College Pile) method, the NGI (Norwegian Geotechnical Institute) method, the UWA (University of Western Australia) method and the Fugro method. A statistical analysis of the model uncertainty associated with each pile capacity method was performed. Two cases were analysed: Pile 1, the pile most affected by shallow gas problems, and the piles other than Pile 1. Using the mean estimate of soil properties, the five (5) methods used for deterministic estimation of axial pile capacity in compression predict an axial capacity from 28 to 42 MN for Pile 1 and 32 to 49 MN for the piles other than Pile 1. These values refer to the static capacity shortly after pile installation. They do not include the effects of cyclic loading during the design storm or of time after installation on the axial pile capacity. On average, the axial pile capacity is expected to have increased by about 40% because of ageing since the installation of the platform in 1977. On the other hand, the cyclic loading effects during the design storm may reduce the axial capacity of the piles by around 25%. The study concluded that all piles have a sufficient safety factor when pile ageing and cyclic loading effects are considered, as all safety factors are above 2.0 for maximum operating and storm loads. Keywords: axial capacity, cyclic loading, pile ageing, shallow gas
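The reported ageing and cyclic-loading percentages combine as a simple product; a back-of-envelope check (the storm load is a placeholder, not a figure from the paper):

```python
# Numbers from the abstract: ageing gain ~ +40% since 1977,
# cyclic degradation ~ -25% during the design storm.
static_capacity_MN = 28.0            # lower-bound estimate for Pile 1
aged = static_capacity_MN * 1.40     # capacity after ~40 years of set-up
in_storm = aged * (1 - 0.25)         # net capacity under design-storm cycling

storm_load_MN = 14.0                 # hypothetical factored storm load (illustrative)
print(f"net capacity {in_storm:.1f} MN, safety factor {in_storm / storm_load_MN:.2f}")
```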
Procedia PDF Downloads 345
1096 A Low-Latency Quadratic Extended Domain Modular Multiplier for Bilinear Pairing Based on Non-Least Positive Multiplication
Authors: Yulong Jia, Xiang Zhang, Ziyuan Wu, Shiji Hu
Abstract:
The calculation of bilinear pairings is the core of the SM9 algorithm, which relies on the underlying prime field algorithms and the quadratic extension field algorithms. Among the field algorithms, the modular multiplication operation is the most time-consuming part. Therefore, the underlying modular multiplication algorithm is optimized to maximize the operation speed of bilinear pairings. This paper uses a modular multiplication method based on the non-least positive (NLP) form, combined with Karatsuba and schoolbook multiplication, to improve the Montgomery algorithm. At the same time, according to the characteristics of multiplication in the quadratic extension field, a quadratic extension field Fp₂-NLP modular multiplication algorithm for bilinear pairings is proposed, which effectively reduces the operation time of modular multiplication in the quadratic extension field. The multiplication unit in the quadratic extension field is implemented using the SMIC 55 nm process, and two different implementation architectures are designed to cope with different application scenarios. Compared with the existing related literature, the output latency of this design can reach a minimum of 15 cycles. The shortest time for calculating the (AB+CD)r⁻¹ mod p form is 37.5 ns, and the combined area-time product (AT) is 11400. The final R-ate pairing hardware accelerator consumes 2670k equivalent logic gates and 1.8 ms of computing time in the 55 nm process. Keywords: SM9, hardware, NLP, Montgomery
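The NLP/Karatsuba refinements are specific to the paper, but they sit on top of classic Montgomery reduction, which the short Python sketch below implements for arbitrary-precision integers; the modulus and operands are arbitrary choices for illustration.

```python
def montgomery_setup(p, bits=256):
    """Constants for odd modulus p: R = 2^bits and p' with p*p' = -1 (mod R)."""
    R = 1 << bits
    p_prime = (-pow(p, -1, R)) % R              # needs Python 3.8+ modular inverse
    return R, p_prime

def redc(T, p, R, p_prime):
    """Montgomery reduction (REDC): returns T * R^{-1} mod p for 0 <= T < p*R."""
    bits = R.bit_length() - 1                   # R = 2^bits
    m = ((T & (R - 1)) * p_prime) & (R - 1)     # m = (T mod R) * p' mod R
    t = (T + m * p) >> bits                     # exact division by R
    return t - p if t >= p else t

# Modular multiplication a*b mod p in the Montgomery domain.
p = (1 << 255) - 19                             # an odd prime for illustration
R, p_prime = montgomery_setup(p)
a, b = 123456789, 987654321
aR, bR = (a * R) % p, (b * R) % p               # map into Montgomery form
abR = redc(aR * bR, p, R, p_prime)              # = a*b*R mod p
print(redc(abR, p, R, p_prime) == (a * b) % p)  # map back and check: True
```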
Procedia PDF Downloads 8
1095 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting
Authors: Abhijeet Ostawal, Parmjit Lall
Abstract:
The run times for a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs and lost productivity. Software developers continuously work towards developing more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models that are based on much older versions of software lacking the latest capabilities. The multi-modal transport model that we had could only be run with sequential assignment. Recent upgrades to the software now allow the assignment to be run in parallel, a concept called parallelization, implemented as a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget and the potential changes in trip assignment. This article shows how to adapt and update the Python script so that it can be used with older software versions by calling the latest version for the assignment and then returning to the old version, without affecting the results. Through a process of trial and error, run-time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied the methodology to other, even older versions of the software, resulting in huge time savings and more productivity and efficiency for both client and consultant. Keywords: model run time, demand model, parallelisation, python scripting
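A skeleton of the orchestration idea in Python, with entirely hypothetical executable paths, script names, and flags, since the actual modelling package is not named in the abstract:

```python
import shutil
import subprocess
from pathlib import Path

# Illustrative orchestration only: executable names, flags, and file layout
# are hypothetical, as the transport-modelling software is not named.
NEW_EXE = r"C:\Software\v23\model.exe"      # latest version: parallel assignment
OLD_EXE = r"C:\Software\v18\model.exe"      # validated version: everything else

def run(exe, script, scenario):
    """Run one modelling step and fail loudly if the software reports an error."""
    subprocess.run([exe, "-run", script, "-scenario", scenario], check=True)

scenario = "AM_peak"
run(OLD_EXE, "demand_and_skims.s", scenario)        # keep validated demand steps
run(NEW_EXE, "parallel_assignment.s", scenario)     # borrow parallelization only
shutil.copy(Path("outputs") / f"{scenario}_assigned.mat",
            Path("validated_model") / "inputs")     # hand results back
run(OLD_EXE, "post_processing.s", scenario)         # resume in the old version
```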
Procedia PDF Downloads 118
1094 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic estimates. Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
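A minimal sketch of the text side of such a pipeline (TF-IDF features from report text feeding a Random Forest); the findings texts and labels below are invented stand-ins, and the image-derived features the paper merges in are omitted.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Tiny illustrative stand-ins for radiology report findings and labels;
# the real study uses the Indiana University chest X-ray reports.
reports = [
    "heart size normal lungs are clear no acute disease",
    "stable cardiomegaly mild pulmonary edema",
    "right lower lobe airspace opacity suspicious for pneumonia",
    "no focal consolidation pleural effusion or pneumothorax",
    "large left pleural effusion with adjacent atelectasis",
    "clear lungs no pleural effusion normal mediastinum",
] * 20
labels = [0, 1, 1, 0, 1, 0] * 20          # 1 = abnormal / urgent

X_tr, X_te, y_tr, y_te = train_test_split(reports, labels, random_state=0)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=300, random_state=0))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```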
Procedia PDF Downloads 46
1093 Estimation of Twist Loss in the Weft Yarn during Air-Jet Weft Insertion
Authors: Muhammad Umair, Yasir Nawab, Khubab Shaker, Muhammad Maqsood, Adeel Zulfiqar, Danish Mahmood Baitab
Abstract:
Fabric is a flexible woven material consisting of a network of natural or artificial fibers, often referred to as thread or yarn. Today, fabrics are produced by weaving, braiding, knitting, tufting and non-woven processes. Weaving is a method of fabric production in which warp and weft yarns are interlaced perpendicular to each other. There is an infinite number of ways of interlacing warp and weft yarn, and each produces a different fabric structure. The yarns parallel to the machine direction are called warp yarns, and the yarns perpendicular to the machine direction are called weft or filling yarns. Air-jet weaving is the modern method of weft insertion, and air-jet looms are considered high-speed looms. Twist loss in the weft yarn during air-jet insertion affects its strength. The aim of this study was to investigate the twist change in the weft yarn during air-jet weft insertion. A total of 8 samples were produced using 1/1 plain and 3/1 twill weave designs with two fabric widths and the same loom settings. Two different types of yarn, cotton and a PC blend, were used. The effect of material type, weave design and fabric width on the twist change of the weft yarn was measured and discussed: the twist of the weft yarn after insertion was compared with that of the yarn before insertion, and the twist loss was calculated. A wider fabric leads to higher twist loss in the yarn. Keywords: air jet loom, twist per inch, twist loss, weft yarn
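The quantity being reported reduces to a simple percentage; a short helper makes the calculation explicit (the readings are illustrative, not from the study):

```python
def twist_loss_percent(tpi_before, tpi_after):
    """Twist loss as a percentage of the twist in the supply (pre-insertion) yarn."""
    return (tpi_before - tpi_after) / tpi_before * 100.0

# Illustrative readings: twist per inch measured before and after insertion.
print(f"{twist_loss_percent(18.0, 16.6):.1f}% twist loss")   # -> 7.8%
```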
Procedia PDF Downloads 403
1092 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks
Authors: Khalid Ali, Manar Jammal
Abstract:
In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms. The algorithms aim to predict future O-RAN traffic from previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Two ML models were proposed and evaluated based on their prediction capabilities. Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity
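As a sketch of the proactive idea with one of the keyword models, the snippet below fits an ARIMA model (statsmodels) to a synthetic traffic trace and converts the forecast peak into a VNF replica count; the ARIMA order, capacity figure, and trace are all assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for per-slice O-RAN traffic (requests per 15-min bin)
# with a daily cycle; the real models are trained on measured traces.
rng = np.random.default_rng(7)
t = np.arange(96 * 14)                                  # two weeks of bins
traffic = 200 + 80 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 10, t.size)
series = pd.Series(traffic)

model = ARIMA(series, order=(2, 0, 1)).fit()            # order chosen ad hoc
forecast = model.forecast(steps=8)                      # next two hours

# A proactive scaler provisions VNF replicas ahead of the predicted peak.
capacity_per_vnf = 120.0                                # assumed req/replica
replicas = int(np.ceil(forecast.max() / capacity_per_vnf))
print(f"predicted peak {forecast.max():.0f} req -> scale to {replicas} replicas")
```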
Procedia PDF Downloads 226
1091 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification
Authors: Rujia Chen, Ajit Narayanan
Abstract:
Convolutional neural networks (CNNs), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those possible through standard filter learning with BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures. Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels
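A proof-of-concept in the same spirit, reduced to a toy task: a population of 3x3 filters is evolved with truncation selection, uniform crossover, and Gaussian mutation, and fitness is the accuracy of a threshold rule on the mean feature-map response. No backpropagation is used anywhere. Everything here is illustrative rather than the paper's exact setup.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(8)

# Toy task: separate vertical-edge images from horizontal-edge images.
def make_image(vertical):
    img = np.zeros((8, 8))
    if vertical: img[:, 4:] = 1.0
    else:        img[4:, :] = 1.0
    return img + rng.normal(0, 0.05, (8, 8))

images = [make_image(v) for v in ([True] * 20 + [False] * 20)]
labels = np.array([1] * 20 + [0] * 20)

def fitness(flt):
    feats = np.array([correlate2d(im, flt, mode="valid").mean() for im in images])
    preds = (feats > feats.mean()).astype(int)          # threshold classifier
    return max((preds == labels).mean(), ((1 - preds) == labels).mean())

pop = rng.normal(0, 1, (30, 3, 3))
for gen in range(40):
    scores = np.array([fitness(f) for f in pop])
    parents = pop[np.argsort(scores)[-10:]]             # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random((3, 3)) < 0.5                 # uniform crossover
        kids.append(np.where(mask, a, b) + rng.normal(0, 0.1, (3, 3)))  # mutation
    pop = np.vstack([parents, np.array(kids)])
print("best accuracy:", max(fitness(f) for f in pop))
```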
Procedia PDF Downloads 186
1090 The Efficiency of AFLP and ISSR Markers in Genetic Diversity Estimation and Gene Pool Classification of Iranian Landrace Bread Wheat (Triticum Aestivum L.) Germplasm
Authors: Reza Talebi
Abstract:
Wheat (Triticum aestivum) is one of the most important food staples in Iran. Understanding genetic variability among landrace wheat germplasm is important for breeding. Landraces endemic to Iran are a genetic resource that is distinct from other wheat germplasm. In this study, 60 Iranian landrace wheat accessions were characterized using AFLP and ISSR markers. Twelve AFLP primer pairs detected 128 polymorphic bands among the sixty genotypes. The mean polymorphism rate based on AFLP data was 31%; however, a wide polymorphism range among primer pairs was observed (22–40%). The polymorphic information content (PIC) value, calculated to assess the informativeness of each marker, ranged from 0.28 to 0.4, with a mean of 0.37. According to the AFLP molecular data, cluster analysis grouped the genotypes into five distinct clusters. ISSR markers generated 68 bands (an average of 6 bands per primer), of which 31 were polymorphic (45%) across the 60 wheat genotypes. The PIC value for the ISSR markers ranged from 0.14 to 0.48, with an average of 0.33. Based on the ISSR-PCR data, cluster analysis grouped the genotypes into three distinct clusters. Both AFLP and ISSR markers showed a high level of genetic diversity in the Iranian landrace wheat accessions, which has been maintained at a relatively constant level in recent years. Keywords: wheat, genetic diversity, AFLP, ISSR
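The PIC figures quoted above can be reproduced from band frequencies with the simplified formula PIC = 1 - Σp²ᵢ, whose maximum for a two-state (present/absent) marker is 0.5, consistent with the reported maxima near 0.4-0.48; the frequency below is illustrative.

```python
def pic(allele_freqs):
    """Polymorphic information content for a marker, PIC = 1 - sum(p_i^2)
    (a common simplified form for dominant markers such as AFLP/ISSR bands)."""
    return 1.0 - sum(p * p for p in allele_freqs)

# A dominant band scored present in 55% of accessions:
print(f"PIC = {pic([0.55, 0.45]):.2f}")   # -> 0.50
```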
Procedia PDF Downloads 451
1089 Estimation of the Seismic Response Modification Coefficient in the Superframe Structural System
Authors: Ali Reza Ghanbarnezhad Ghazvini, Seyyed Hamid Reza Mosayyebi
Abstract:
In recent decades, an earthquake has occurred approximately every five years in certain regions of Iran. To mitigate the impact of these seismic events, it is crucial to identify and thoroughly assess the vulnerability of buildings and infrastructure, ensuring their safety through principled reinforcement. By adopting new methods of risk assessment, we can effectively reduce the potential risks associated with future earthquakes. In our research, we have observed that the response modification coefficient is 1.65 for the initial structure and 1.72 for the Superframe structure. This indicates that the Superframe structure can enhance the strength of the main structural members by approximately 10% through the utilization of super beams. Furthermore, based on the comparative analysis of the two structures conducted in this study, we have successfully designed a stronger structure with minimal change in the response modification coefficient. Additionally, this design allows for greater energy dissipation during seismic events, further enhancing the structure's resilience to earthquakes. By comprehensively examining and reinforcing the vulnerability of buildings and infrastructure, along with implementing advanced risk assessment techniques, we can significantly reduce the casualties and damage caused by earthquakes in Iran. The findings of this study offer valuable insights for civil engineering professionals in the field of structural engineering, aiding them in designing safer and more resilient structures. Keywords: modal pushover analysis, response modification factor, high-strength concrete, concrete shear walls, high-rise building
Procedia PDF Downloads 143
1088 Radial Basis Surrogate Model Integrated to Evolutionary Algorithm for Solving Computation Intensive Black-Box Problems
Authors: Abdulbaset Saad, Adel Younis, Zuomin Dong
Abstract:
For design optimization of high-dimensional expensive problems, an effective and efficient optimization methodology is desired. This work proposes a series of modifications to the Differential Evolution (DE) algorithm for solving computation-intensive black-box problems. The proposed methodology is called Radial Basis Meta-Model Assisted Differential Evolution (RBF-DE), a global optimization algorithm based on meta-modeling techniques. A meta-modeling-assisted DE is proposed to solve computationally expensive optimization problems. The Radial Basis Function (RBF) model is used as a surrogate to approximate the expensive objective function, while DE employs a mechanism to dynamically select the best-performing combination of parameters such as the differential rate, crossover probability, and population size. The proposed algorithm is tested on benchmark functions and real-life practical applications and problems. The test results demonstrate that the proposed algorithm is promising and performs well compared to other optimization algorithms. The proposed algorithm is capable of converging to acceptable and good solutions in terms of accuracy, number of evaluations, and time needed to converge. Keywords: differential evolution, engineering design, expensive computations, meta-modeling, radial basis function, optimization
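A compact sketch of the surrogate-assisted pattern using SciPy building blocks: sample the expensive function sparsely, fit an RBF surrogate, and let differential evolution search the surrogate instead. The paper's RBF-DE additionally adapts DE parameters and refines the surrogate iteratively, which this sketch omits.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive(x):
    """Stand-in for a costly simulation (Rosenbrock); count every call."""
    expensive.calls += 1
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
expensive.calls = 0

bounds = [(-2, 2), (-2, 2)]
rng = np.random.default_rng(9)

# 1) Sample the expensive function sparsely and fit an RBF surrogate.
X = rng.uniform([-2, -2], [2, 2], size=(40, 2))
y = np.array([expensive(x) for x in X])
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# 2) Let DE search the cheap surrogate instead of the simulation.
result = differential_evolution(lambda x: surrogate(x.reshape(1, -1))[0],
                                bounds, seed=0, maxiter=200)

# 3) Verify the surrogate optimum with one true evaluation.
print(f"surrogate optimum {result.x.round(3)}, true f = {expensive(result.x):.4f}, "
      f"expensive calls = {expensive.calls}")
```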
Procedia PDF Downloads 397
1087 Modelling Conceptual Quantities Using Support Vector Machines
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Uncertainty in cost is a major factor affecting the performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating the conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. The R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions. Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression
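The study was done in R; an equivalent Python sketch of the SVR-plus-bootstrap idea is shown below on synthetic design variables, with the variable set and data-generating rule assumed purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.utils import resample

# Synthetic stand-ins for the design variables (footprint m^2, floors,
# gross floor loading kPa, soil bearing pressure kPa) and concrete volume (m^3).
rng = np.random.default_rng(10)
n = 120
X = np.column_stack([rng.uniform(300, 2000, n), rng.integers(2, 20, n),
                     rng.uniform(3, 10, n), rng.uniform(100, 400, n)])
y = 0.12 * X[:, 0] * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 150, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Non-parametric SVR (RBF kernel) with a bootstrap interval on test R^2,
# echoing the paper's SVR + bootstrap-resampling comparison (done there in R).
model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=10.0))
model.fit(X_tr, y_tr)
r2 = [model.score(*resample(X_te, y_te, random_state=b)) for b in range(200)]
print(f"test R^2 = {model.score(X_te, y_te):.3f} "
      f"(bootstrap 95% CI {np.percentile(r2, 2.5):.3f}-{np.percentile(r2, 97.5):.3f})")
```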
Procedia PDF Downloads 206
1086 Determination of Nutritional Value and Steroidal Saponin of Fenugreek Genotypes
Authors: Anita Singh, Richa Naula, Manoj Raghav
Abstract:
Nutrient-rich and high-yielding varieties of fenugreek can be developed by using genotypes which are naturally high in nutrients. Gene banks harbour a scanty germplasm collection of Trigonella spp. and very little background information about its genetic diversity. The extent of genetic diversity in a specific breeding population depends upon the genotypes included in it. The present investigation aims at the estimation of macronutrients (phosphorus by spectrophotometer and potassium by flame photometer), micronutrients, namely iron, zinc, manganese and copper, from the seeds of fenugreek genotypes using atomic absorption spectrophotometry, protein by a Rapid N Cube analyser, and steroidal saponins. Twenty-eight genotypes of fenugreek, along with two standard checks, namely Pant Ragini and Pusa Early Bunching, were collected from different parts of India, and the nutrient contents of each genotype were determined at the G. B. P. U. A. & T. Laboratory, Pantnagar. The highest potassium content was observed in PFG-35 (1207 mg/100g). PFG-37 and PFG-20 were the richest in phosphorus, iron and manganese content among all the genotypes. The lowest zinc content was found in PFG-26 (1.19 mg/100g), while the maximum zinc content was found in PFG-28 (4.43 mg/100g). The highest copper content was found in PFG-26 (1.97 mg/100g). PFG-39 had the highest protein content (29.60%). Significant differences were observed in steroidal saponin content among the genotypes. Saponin content ranged from 0.38 g/100g to 1.31 g/100g and was highest in PFG-36 (1.31 g/100g), followed by PFG-17 (1.28 g/100g). Therefore, the genotypes which are rich in nutrient and oil content can be used for plant biofortification, dietary supplements, and herbal products. Keywords: genotypes, macronutrients, micronutrient, protein, seeds
Procedia PDF Downloads 254
1085 Comparison Study of Machine Learning Classifiers for Speech Emotion Recognition
Authors: Aishwarya Ravindra Fursule, Shruti Kshirsagar
Abstract:
At the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests applied to SER. The research employs four datasets: CREMA-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features like the Zero Crossing Rate (ZCR), chroma_stft, Mel Frequency Cepstral Coefficients (MFCC), root mean square (RMS) value, and mel spectrogram. These features are used to train and evaluate the models’ ability to recognize eight types of emotions from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the Random Forest algorithm demonstrated superior performance, achieving approximately 79% accuracy, which suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques. The findings hold promise for the development of more precise emotion recognition systems in the future. Keywords: comparison, ML classifiers, KNN, decision tree, SVM, random forest, logistic regression, ensemble classifiers
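A sketch of the named feature extraction using librosa (API as in recent releases, assumed here), mean-pooled per clip and fed to the best-performing model, a Random Forest; the dataset file paths and emotion labels must be supplied by the user.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_features(path):
    """Mean-pooled versions of the features used in the paper: ZCR,
    chroma_stft, MFCC, RMS, and mel spectrogram."""
    y, sr = librosa.load(path, sr=None)
    return np.hstack([
        librosa.feature.zero_crossing_rate(y).mean(),
        librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1),
        librosa.feature.rms(y=y).mean(),
        librosa.feature.melspectrogram(y=y, sr=sr).mean(axis=1),
    ])

# With file paths and labels from CREMA-D/SAVEE/TESS/RAVDESS in hand:
# X = np.array([extract_features(p) for p in paths]); y = np.array(labels)
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
```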
Procedia PDF Downloads 45
1084 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution
Authors: Najrullah Khan, Athar Ali Khan
Abstract:
The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust-region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation. Keywords: Bayesian inference, JAGS, Laplace approximation, LaplacesDemon, posterior, R software, simulation
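Alongside the R/JAGS implementation, an independent-proposal Metropolis sampler is easy to state in a few lines; the Python sketch below uses a normal toy target as a placeholder where the TPGE survival log-posterior with censoring would go.

```python
import numpy as np
from scipy.stats import norm

def independence_metropolis(log_post, proposal, n_draws=20_000, seed=0):
    """Independent Metropolis sampler: candidates come from a fixed proposal
    distribution rather than a random walk, as discussed in the paper."""
    rng = np.random.default_rng(seed)
    theta = proposal.rvs(random_state=rng)
    lp, lq = log_post(theta), proposal.logpdf(theta)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        cand = proposal.rvs(random_state=rng)
        lp_c, lq_c = log_post(cand), proposal.logpdf(cand)
        # the acceptance ratio corrects for the proposal density on both sides
        if np.log(rng.random()) < (lp_c - lp) - (lq_c - lq):
            theta, lp, lq = cand, lp_c, lq_c
        draws[i] = theta
    return draws

# Placeholder log-posterior (the TPGE likelihood with censoring would go here).
log_post = lambda th: norm.logpdf(th, loc=1.3, scale=0.4)
proposal = norm(loc=1.0, scale=1.0)        # fixed, heavier-tailed than the target
samples = independence_metropolis(log_post, proposal)
print(f"posterior mean ~ {samples[5000:].mean():.2f}")
```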
Procedia PDF Downloads 535
1083 Three Tier Indoor Localization System for Digital Forensics
Authors: Dennis L. Owuor, Okuthe P. Kogeda, Johnson I. Agbinya
Abstract:
Mobile localization has attracted a great deal of attention recently due to the introduction of wireless networks. Although several localization algorithms and systems have been implemented and discussed in the literature, very few researchers have exploited the gap that exists between indoor localization, tracking, external storage of location information, and outdoor localization for the purpose of digital forensics during and after a disaster. The contribution of this paper lies in the implementation of a robust system that is capable of locating and tracking mobile device users and storing location information in the cloud, both indoors and partially outdoors. The system can be used during a disaster to track and locate mobile phone users. The developed system is a mobile application built on Android using Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript and MATLAB. Using the Waterfall model of software development, we have implemented a three-tier system that is able to track, locate and store mobile device information in a secure cloud database on an almost real-time basis. The outcome of the study showed that the developed system is efficient with regard to tracking and locating mobile devices. The system is also flexible, i.e., it can be used in any building with few adjustments. Finally, the system is accurate both indoors and outdoors in terms of locating and tracking mobile devices. Keywords: indoor localization, digital forensics, fingerprinting, tracking and cloud
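At the indoor tier, a typical choice consistent with the 'fingerprinting' keyword is weighted k-nearest-neighbour matching of RSSI vectors against a survey database; the sketch below uses invented fingerprints, and the resulting estimate would then be pushed to the cloud store.

```python
import numpy as np

# Offline phase: RSSI fingerprints (dBm, one column per access point)
# collected at known indoor reference points.  Values are illustrative.
fingerprints = np.array([
    [-45, -70, -80],    # reference point A
    [-60, -50, -75],    # reference point B
    [-78, -62, -48],    # reference point C
    [-55, -58, -66],    # reference point D
])
coords = np.array([[1.0, 1.0], [1.0, 8.0], [9.0, 8.0], [5.0, 4.0]])  # metres

def locate(rssi, k=3):
    """Online phase: weighted k-nearest-neighbour match in signal space."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    return (coords[nearest] * w[:, None]).sum(axis=0) / w.sum()

position = locate(np.array([-57, -56, -68]))
print(f"estimated position: ({position[0]:.1f}, {position[1]:.1f}) m")
```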
Procedia PDF Downloads 337
1082 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may be of irregular, digit, or character shape. Objects and their internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The focus is mainly on recognizing the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction and, finally, hull detection. Detecting the sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to hull recognition in irregularly shaped objects, such as black holes in space exploration, using their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in decision processes (e.g., clearing traffic or identifying the number of opposing persons in warfare). Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm
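The SASK algorithm itself is not published in full here; as a generic illustration of counting sub-regional (internal) objects, the sketch below uses OpenCV contour hierarchies on a thresholded image.

```python
import cv2

def count_internal_objects(image_path):
    """Count outer hulls and the objects nested inside them using OpenCV
    contour hierarchies -- an illustration of sub-regional hull counting,
    not the SASK algorithm itself."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    # RETR_CCOMP yields a two-level hierarchy: hierarchy[0][i][3] == -1 marks
    # outer boundaries; anything else sits inside another object.
    outer = sum(1 for h in hierarchy[0] if h[3] == -1)
    inner = len(contours) - outer
    return outer, inner

# outer, inner = count_internal_objects("handwritten_digit.png")  # hypothetical file
# print(f"{outer} outer hull(s), {inner} internal object(s)")
```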
Procedia PDF Downloads 349
1081 Formulation and Anticancer Evaluation of Beta-Sitosterol in Henna Methanolic Extract Embedded in Controlled Release Nanocomposite
Authors: Sanjukta Badhai, Durga Barik, Bairagi C. Mallick
Abstract:
In the present study, Beta-Sitosterol in Lawsonia methanolic leaf extract embedded in a controlled-release nanocomposite was prepared and evaluated for in vivo anticancer efficacy against dimethyl hydrazine (DMH) induced colon cancer. Colon cancer was induced by s.c. injection of DMH (20 mg/kg b.wt) for 15 weeks. The animals were divided into five groups as follows: control, DMH alone, DMH and Beta-Sitosterol nanocomposite (50 mg/kg), DMH and Beta-Sitosterol nanocomposite (100 mg/kg), and DMH and standard Silymarin (100 mg/kg), and the treatment was carried out for 15 weeks. At the end of the study period, blood was withdrawn and serum was separated for haematological and biochemical analyses and tumor markers. Further, the colonic tissue was removed for the estimation of antioxidants and histopathological analysis. The results of the study display that DMH intoxication elicits altered haematological parameters (RBC, WBC, and Hb), elevated lipid peroxidation and decreased antioxidant levels (SOD, CAT, GPX, GST and GSH), elevated lipid profiles (cholesterol and triglycerides), elevated tumor markers (CEA and AFP) and altered colonic tissue histology. Meanwhile, treatment with Beta-Sitosterol nanocomposites significantly restored the altered biochemical parameters in DMH-induced colon cancer, mediated by its anticancer efficacy. Further, the Beta-Sitosterol nanocomposite (100 mg/kg) showed marked efficacy. Keywords: nanocomposites, herbal formulation, henna, beta sitosterol, colon cancer, dimethyl hydrazine, antioxidant, lipid peroxidation
Procedia PDF Downloads 164
1080 Machine Learning for Targeting of Conditional Cash Transfers: Improving the Effectiveness of Proxy Means Tests to Identify Future School Dropouts and the Poor
Authors: Cristian Crespo
Abstract:
Conditional cash transfers (CCTs) have been targeted towards the poor. Thus, their targeting assessments check whether these schemes have been allocated to low-income households or individuals. However, CCTs have more than one goal and target group. An additional goal of CCTs is to increase school enrolment. Hence, students at risk of dropping out of school are also a target group. This paper analyses whether one of the most common targeting mechanisms of CCTs, a proxy means test (PMT), is suitable for identifying both the poor and future school dropouts. The PMT is compared with alternative approaches that use the outputs of a predictive model of school dropout. This model was built using machine learning algorithms and rich administrative datasets from Chile. The paper shows that using machine learning outputs in conjunction with the PMT increases targeting effectiveness by identifying more students who are either poor or future dropouts. This joint targeting approach increases effectiveness in different scenarios, except when the social valuations of the two target groups differ greatly. In these cases, the most likely optimal approach is to adopt solely the targeting mechanism designed to find the highly valued group. Keywords: conditional cash transfers, machine learning, poverty, proxy means tests, school dropout prediction, targeting
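The joint targeting rule can be stated compactly: qualify a household if it passes either the PMT poverty screen or the ML dropout-risk screen. All scores and cutoffs below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def select_beneficiaries(pmt_score, dropout_prob, pmt_cutoff, risk_cutoff):
    """Joint targeting rule: a household qualifies if its PMT score falls
    below the poverty cutoff OR its predicted dropout probability exceeds
    the risk cutoff.  Cutoffs and scores are illustrative placeholders."""
    return (pmt_score <= pmt_cutoff) | (dropout_prob >= risk_cutoff)

rng = np.random.default_rng(11)
pmt = rng.normal(600, 100, 1000)          # hypothetical PMT scores
p_drop = rng.beta(2, 18, 1000)            # ML-predicted dropout probabilities

eligible = select_beneficiaries(pmt, p_drop, pmt_cutoff=500, risk_cutoff=0.30)
print(f"{eligible.mean():.1%} of households targeted "
      f"({(p_drop >= 0.30).mean():.1%} via dropout risk alone)")
```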
Procedia PDF Downloads 205