Search results for: Grey prediction model
16545 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data
Authors: R. Shamsi, F. Sharifi
Abstract:
In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using the multi-objective programming (MOP) structure. In some problems the inputs might be stochastic while the outputs are deterministic, and vice versa. For such cases we propose a multi-objective DEA-R model, because the BCC model sometimes introduces an efficient decision-making unit (DMU) as inefficient (e.g., when unnecessary and irrational weights reduce the efficiency score), whereas the DEA-R model considers the same DMU efficient. In other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). We therefore provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model under constant returns to scale can be replaced by the MOP DEA model without explicit outputs under variable returns to scale, and vice versa. Solving the proposed model with interactive methods yields a projection that reflects the viewpoints of the decision maker (DM) and the analyst, and is therefore closer to reality and more practical. Finally, an application is provided.
Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis
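To make the DEA building block concrete, the following is a minimal sketch of a deterministic, input-oriented CCR DEA efficiency computation via linear programming. It is a strong simplification of the stochastic MOP DEA-R model above, and the five-DMU dataset is purely illustrative.

```python
# Input-oriented CCR DEA efficiency scores via linear programming (a sketch,
# assuming deterministic data; the dataset below is a toy example).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0], [4.0, 6.0]])  # inputs
Y = np.array([[1.0], [2.0], [2.5], [3.0], [2.0]])                            # outputs

def ccr_efficiency(o):
    n, m = X.shape            # number of DMUs, number of inputs
    s = Y.shape[1]            # number of outputs
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta over z = [theta, lambdas]
    A_ub = np.vstack([np.c_[-X[o], X.T],           # sum_j lam_j x_ij <= theta * x_io
                      np.c_[np.zeros(s), -Y.T]])   # sum_j lam_j y_rj >= y_ro
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for o in range(len(X)):
    print(f"DMU {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```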
Procedia PDF Downloads 106
16544 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, one that can mirror, shadow, and thread with the real-world entity, and this core is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
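As a concrete illustration of the train/validate split described above, here is a minimal sketch of an ANN regression on synthetic sensor channels. The column meanings, network size, and the toy power relation are assumptions for illustration only, not the paper's data or architecture.

```python
# Sketch of the ANN digital model: predict panel power from sensor channels
# (synthetic data; irradiance-driven output derated by panel temperature).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 1000
# Hypothetical channels: irradiance, air temp, water temp, panel temp, wave height
X = rng.uniform([200, 10, 8, 15, 0.0], [1000, 35, 25, 60, 0.3], size=(n, 5))
y = 0.18 * X[:, 0] * (1 - 0.004 * (X[:, 3] - 25)) + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```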
Procedia PDF Downloads 14
16543 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism for updating the parameters over time is the scaled score of the likelihood function. The results revealed high persistence in the time-varying parameters: the location parameter is high, and the skewness parameter implies a departure of the scale parameter from normality, with an unconditional parameter of 1.5. The results also revealed persistence of leptokurtic behaviour in stock returns, implying that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research to model the time-varying parameters with a more detailed specification that accounts for the heavy-tailed distribution of the series and computes the risk measures associated with the returns.
Keywords: generalized autoregressive score model, South Africa, stock returns, time-varying
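For readers unfamiliar with the score-driven update, here is a minimal sketch of a Gaussian GAS(1,1) location model. For a Gaussian location parameter the scaled score reduces to the prediction error, so the recursion is simple; omega, A, and B below are illustrative values, not the paper's estimates.

```python
# GAS(1,1) location filter on a simulated drifting-mean return series (sketch).
import numpy as np

rng = np.random.default_rng(0)
omega, A, B = 0.02, 0.10, 0.95

T = 500
true_mu = np.cumsum(rng.normal(0, 0.02, T))   # slowly drifting true mean
y = true_mu + rng.normal(0, 0.5, T)           # observed returns

f = np.zeros(T)
for t in range(T - 1):
    score = y[t] - f[t]                       # scaled score for Gaussian location
    f[t + 1] = omega + A * score + B * f[t]   # score-driven parameter update

print("mean absolute filtering error:", np.mean(np.abs(f - true_mu)).round(3))
```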
Procedia PDF Downloads 502
16542 Food Security Indicators in Deltaic and Coastal Research: A Scoping Review
Authors: Sylvia Szabo, Thilini Navaratne, Indrajit Pal, Seree Park
Abstract:
Deltaic and coastal regions are often strategically important from both local and regional perspectives. While deltas are known to be the bread baskets of the world, delta inhabitants often face the risk of food and nutritional insecurity. These risks are highly exacerbated by the impacts of climate and environmental change. While numerous regional studies have examined the prevalence and determinants of food security in specific delta and coastal regions, there is still a lack of a systematic analysis of the most widely used scientific food security indicators. In order to fill this gap, a systematic review was carried out using Covidence, a Cochrane-adopted systematic review processing software. Papers included in the review were selected from the SCOPUS, Thomson Reuters Web of Science, Science Direct, ProQuest, and Google Scholar databases. Both scientific papers and grey literature (e.g., reports by international organizations) were considered. The results were analyzed by food security components (access, availability, quality, and strategy) and by world regions. Suggestions for further food security, nutrition, and health research, as well as policy-related implications, are also discussed.
Keywords: delta regions, coastal, food security, indicators, systematic review
Procedia PDF Downloads 243
16541 Reduction of Rotor-Bearing-Support Finite Element Model through Substructuring
Authors: Abdur Rosyid, Mohamed El-Madany, Mohanad Alata
Abstract:
Due to simplicity and low cost, rotordynamic systems are often modeled using lumped parameters. Recently, finite elements have been used to model rotordynamic systems, as they offer higher accuracy; however, they involve many degrees of freedom, which increases cost in applications such as control design. For this reason, various model reduction methods have been proposed. This work demonstrates the quality of model reduction of a rotor-bearing-support system through substructuring. The quality of the reduction is evaluated by comparing the first few natural frequencies, modal damping ratios, critical speeds and responses of the full system and the reduced system. The simulation shows that substructuring is adequate to reduce the finite element rotor model in the frequency range of interest, as long as the number and the locations of the master nodes are determined appropriately. However, the reduction is less accurate in an unstable or nearly unstable system.
Keywords: rotordynamic, finite element model, Timoshenko beam, 3D solid elements, Guyan reduction method
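Since the keywords name the Guyan reduction method, here is a minimal sketch of Guyan (static) condensation on a toy 4-DOF system: slave degrees of freedom are condensed out through the stiffness partition, and the first natural frequencies of the full and reduced models are compared. The matrices and the master-DOF choice are illustrative assumptions, not the paper's rotor model.

```python
# Guyan (static) condensation of a 4-DOF spring-mass chain (sketch).
import numpy as np
from scipy.linalg import eigh

def guyan_reduce(K, M, master):
    """Condense stiffness K and mass M onto the master DOFs: u = T @ u_m."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    T = np.zeros((n, len(master)))
    T[master, :] = np.eye(len(master))
    T[slave, :] = -np.linalg.solve(Kss, Ksm)   # static condensation of slaves
    return T.T @ K @ T, T.T @ M @ T

K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.eye(4)
Kr, Mr = guyan_reduce(K, M, master=[0, 3])

w_full = np.sqrt(eigh(K, M, eigvals_only=True))
w_red = np.sqrt(eigh(Kr, Mr, eigvals_only=True))
print("full-model frequencies:   ", np.round(w_full, 4))
print("Guyan-reduced frequencies:", np.round(w_red, 4))  # lowest modes match best
```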
Procedia PDF Downloads 273
16540 A Unified Model for Orotidine Monophosphate Synthesis: Target for Inhibition of Growth of Mycobacterium tuberculosis
Authors: N. Naga Subrahmanyeswara Rao, Parag Arvind Deshpande
Abstract:
Understanding the nucleotide synthesis reactions of an organism is beneficial for understanding its growth, as in Mycobacterium tuberculosis, and thereby for designing anti-TB drugs. One of the reactions of the de novo pathway, which takes place in all organisms, was considered. The reaction takes place between phosphoribosyl pyrophosphate and orotate, catalyzed by orotate phosphoribosyl transferase and a divalent metal ion, and gives orotidine monophosphate, a nucleotide. All the reaction steps of three experimentally proposed mechanisms for this reaction were considered to develop a kinetic rate expression. The model was validated using data for four organisms and could successfully describe the kinetics of the reported data. The developed model can serve as a reliable model to describe the kinetics in new organisms without the need for mechanistic determination. Thus, an organism-independent model was developed.
Keywords: mechanism, nucleotide, organism, tuberculosis
Procedia PDF Downloads 335
16539 Controlling the Expense of Political Contests Using a Modified N-Players Tullock’s Model
Abstract:
This work introduces a generalization of the classical Tullock model of one-stage contests under complete information with an unlimited number of contestants. In the classical Tullock model, the contest winner is not necessarily the highest bidder. Instead, the winner is determined by a draw in which the winning probabilities are the contestants' relative efforts. The Tullock model fits political contests well, in which the winner is not necessarily the highest-effort contestant. This work presents a modified model that uses a simple non-discriminating rule, namely a parameter that influences the total cost planned for an election, through which the contest designer can control the contestants' efforts. The winner pays a fee, and the losers are reimbursed the same amount. Our proposed model includes a mechanism that controls the efforts exerted and balances competition, creating a tighter, less predictable and more interesting contest. Additionally, the proposed model follows the fairness criterion in the sense that it does not alter the contestants' probabilities of winning compared to the classic Tullock model. We provide an analytic solution for the contestant's optimal effort and expected reward.
Keywords: contests, Tullock's model, political elections, control expenses
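Here is a minimal sketch of the classical N-player Tullock contest that the modified model builds on: winning probabilities are relative efforts, and the symmetric equilibrium effort for a prize V with a linear success function is x* = (N - 1)V / N². That closed form is a standard result for the classical model, not the modified one proposed above.

```python
# Classical Tullock contest: win probabilities and symmetric equilibrium (sketch).
import numpy as np

def win_probabilities(efforts):
    """Winner is drawn with probability proportional to effort."""
    efforts = np.asarray(efforts, dtype=float)
    return efforts / efforts.sum()

def symmetric_equilibrium_effort(N, prize):
    """Per-player equilibrium effort for N identical risk-neutral players."""
    return (N - 1) / N**2 * prize

efforts = [4.0, 2.0, 1.0, 1.0]
print("win probabilities:", win_probabilities(efforts))   # highest bidder may still lose
print("equilibrium effort, N=4, V=100:", symmetric_equilibrium_effort(4, 100))
```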
Procedia PDF Downloads 145
16538 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN provides just a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
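To show what "simulation-based inference without a likelihood" means here, the following is a minimal rejection-ABC sketch on a toy two-parameter rainfall-runoff model. The linear-reservoir model, priors, and tolerance are illustrative stand-ins for the four-parameter time-area model of the paper, and the sketch is in Python for brevity although the paper's framework is in R.

```python
# Rejection ABC for a toy rainfall-runoff model (sketch with assumed priors).
import numpy as np

rng = np.random.default_rng(1)

def runoff_model(rain, loss, k):
    """Toy model: subtract an initial loss, then route through a linear reservoir."""
    effective = np.maximum(rain - loss, 0.0)
    q = np.zeros_like(effective)
    for t in range(1, len(effective)):
        q[t] = q[t - 1] + k * (effective[t] - q[t - 1])
    return q

rain = rng.gamma(2.0, 2.0, size=50)
observed = runoff_model(rain, loss=1.0, k=0.3) + rng.normal(0, 0.05, 50)

accepted = []
for _ in range(20000):
    loss, k = rng.uniform(0, 3), rng.uniform(0.05, 0.95)   # draws from the priors
    sim = runoff_model(rain, loss, k)
    if np.sqrt(np.mean((sim - observed) ** 2)) < 0.1:      # keep if distance < tolerance
        accepted.append((loss, k))

post = np.array(accepted)
print(f"accepted {len(post)} draws; posterior means:", post.mean(axis=0))
```

The accepted draws approximate the posterior, which is exactly what yields the credible intervals that a point-estimate calibration cannot provide.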
Procedia PDF Downloads 309
16537 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning
Authors: Xingyu Gao, Qiang Wu
Abstract:
Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database of the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study relies primarily on data from the United States Patent and Trademark Office for artificial intelligence patents; future research could consider more comprehensive sources of AI patent data from a global perspective. While the study takes various factors into account, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
Keywords: patent influence, interpretable machine learning, predictive models, SHAP
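The LightGBM-plus-SHAP workflow described above can be sketched in a few lines. The synthetic features below (claims, citations, owners, novelty) are stand-ins for the paper's early patent indicators; the relation generating the target is an assumption for illustration.

```python
# LightGBM regression with SHAP feature attribution (sketch on synthetic data).
import numpy as np
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(3, n),      # independent claims (illustrative)
    rng.poisson(10, n),     # backward citations (illustrative)
    rng.poisson(2, n),      # owners (illustrative)
    rng.uniform(0, 1, n),   # novelty score (illustrative)
])
y = 8.0 * X[:, 3] + 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
# Mean |SHAP| per feature quantifies each feature's contribution to predictions
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```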
Procedia PDF Downloads 50
16536 Fama French Four Factor Model: A Study of Nifty Fifty Companies
Authors: Deeksha Arora
Abstract:
The study aims to explore the applicability of the widely used asset pricing models, namely the Capital Asset Pricing Model (CAPM) and the Fama-French Four Factor Model, in the Indian equity market. The study will be based on the companies that form part of the Nifty Fifty Index for a period of five years, 2011 to 2016. The asset pricing model is examined by forming portfolios on the basis of three variables: market capitalization (size effect), book-to-market equity ratio (value effect) and profitability. The study provides a basis for testing the presence of the Fama-French Four Factor Model in the Indian stock market, and it may provide a basis for future research on generalized asset pricing models comprising multiple risk factors.
Keywords: book to market equity, Fama French four factor model, market capitalization, profitability, size effect, value effect
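The core test behind such a study is a time-series regression of portfolio excess returns on the factor returns. Here is a minimal sketch with four factors (market, size, value, profitability); the factor series are synthetic placeholders, not Nifty Fifty data.

```python
# Four-factor time-series regression: excess return on MKT-RF, SMB, HML, RMW (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 60                                               # monthly observations
factors = rng.normal(0, 0.03, size=(T, 4))           # MKT-RF, SMB, HML, RMW (synthetic)
true_loadings = np.array([1.1, 0.4, 0.2, 0.1])
excess_ret = factors @ true_loadings + rng.normal(0, 0.01, T)

X = sm.add_constant(factors)                          # intercept = pricing alpha
res = sm.OLS(excess_ret, X).fit()
print(res.params)                                     # alpha, then the four loadings
print("alpha t-stat:", res.tvalues[0])                # alpha near zero supports the model
```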
Procedia PDF Downloads 263
16535 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model
Authors: Abdellahi Cheikh
Abstract:
With the emergence of quantum computers with very powerful capabilities, the security of shared-key exchange between two interlocutors poses a big problem given the rapid development of technologies such as computing power and computing speed. The Diffie-Hellman (DH) algorithm is therefore more vulnerable than ever: no mechanism guarantees the security of the key exchange, so an intermediary who manages to intercept it can do so easily. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. The modification made to our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA and AES, consists of using steganographic photos to hide the contents of the p, g and ClesAES values that are sent in an unencrypted state in the DRA model to calculate each user's public key. This work includes a comparative study between the DRA model and existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. A simulation demonstrates the effectiveness of the modification made to the DRA model. The obtained results show that our model has a security advantage over the existing solutions, and these changes were made to reinforce the security of the DRA model.
Keywords: Diffie-Hellman, DRA, RSA, advanced encryption standard
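To ground the key-agreement step that DRA builds on, here is a toy, deliberately insecure sketch of Diffie-Hellman agreement over the public values p and g, followed by symmetric use of the derived key. The small prime and the XOR "cipher" are stand-ins for real 2048-bit parameters and the RSA/AES primitives the paper combines; this is not the DRA model itself.

```python
# Toy Diffie-Hellman key agreement + symmetric encryption with the derived key.
import hashlib
import secrets

p = 0xFFFFFFFB   # small demo prime; real DH uses 2048-bit safe primes
g = 5

a = secrets.randbelow(p - 2) + 2            # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2            # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)           # public values exchanged in the clear

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob           # both sides derive the same secret

key = hashlib.sha256(str(shared_alice).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Insecure XOR stand-in for AES, for illustration only."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

ct = xor_cipher(b"session message", key)
print(xor_cipher(ct, key))                  # b'session message'
```

The attack the paper addresses is visible here: p, g, A, and B travel unencrypted, which is exactly what the steganographic hiding in the modified DRA model targets.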
Procedia PDF Downloads 94
16534 Feature-Based Summarizing and Ranking from Customer Reviews
Authors: Dim En Nyaung, Thin Lai Lai Thein
Abstract:
Due to the rapid growth of the Internet, web opinion sources emerge dynamically, which is useful to both potential customers and product manufacturers for prediction and decision purposes. These are user-generated contents written in natural language in an unstructured free-text form. Therefore, opinion mining techniques have become popular for automatically processing customer reviews to extract product features and the user opinions expressed over them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, we dedicate our work to the task of opinion summarization. Product feature and opinion extraction is critical to opinion summarization because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of all the features are determined by the SentiWordNet lexicon. The problem of opinion summarization refers to how to relate the opinion words to a certain feature. A probabilistic supervised learning model improves the result and is more flexible and effective.
Keywords: opinion mining, opinion summarization, sentiment analysis, text mining
Procedia PDF Downloads 332
16533 Project Management Agile Model Based on Project Management Body of Knowledge Guideline
Authors: Mehrzad Abdi Khalife, Iraj Mahdavi
Abstract:
This paper presents an agile model for the project management process. The Project Management Body of Knowledge (PMBOK) guideline has been selected as the platform. A combination of computational science and artificial intelligence methodology has been added to the guideline to transform the standard into an agile project management process. The model is thus the combination of a practical standard, computational science and artificial intelligence. In this model, we present a communication model and protocols to keep the process agile. Here, we illustrate the collaboration of man and machine in the project management area with an artificial intelligence approach.
Keywords: artificial intelligence, conceptual model, man-machine collaboration, project management, standard
Procedia PDF Downloads 342
16532 Parameter Estimation for the Oral Minimal Model and Parameter Distinctions Between Obese and Non-obese Type 2 Diabetes
Authors: Manoja Rajalakshmi Aravindakshan, Devleena Ghosh, Chittaranjan Mandal, K. V. Venkatesh, Jit Sarkar, Partha Chakrabarti, Sujay K. Maity
Abstract:
The Oral Glucose Tolerance Test (OGTT) is the primary test used to diagnose type 2 diabetes mellitus (T2DM) in a clinical setting. Analysis of OGTT data using the Oral Minimal Model (OMM), along with the rate of appearance of ingested glucose (Ra), is performed to study differences in model parameters between control and T2DM groups. The differentiation of the model parameters gives insight into the behaviour and physiology of T2DM. The model is also studied to find parameter differences between obese and non-obese T2DM subjects, and the sensitive parameters were correlated with known physiological findings. Sensitivity analysis is performed to understand changes in parameter values with model output, and appropriate statistical tests are done to support the findings. This appears to be the first preliminary application of the OMM with obesity as a distinguishing factor in understanding T2DM from the estimated parameters of an insulin-glucose model, relating the statistical differences in parameters to diabetes pathophysiology.
Keywords: oral minimal model, OGTT, obese and non-obese T2DM, mathematical modeling, parameter estimation
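For context, the oral minimal model is a pair of ODEs for plasma glucose G and insulin action X, driven by the glucose rate of appearance Ra and plasma insulin. The sketch below integrates the standard OMM equations with SciPy; the parameter values and the Ra/insulin input curves are illustrative assumptions, not fitted clinical values from the study.

```python
# Oral minimal model (OMM) forward simulation over a 3-hour OGTT (sketch).
import numpy as np
from scipy.integrate import solve_ivp

SG, SI, p2, V = 0.02, 7e-4, 0.03, 1.7    # glucose effectiveness, insulin sensitivity,
                                          # insulin-action rate, distribution volume
Gb, Ib = 90.0, 10.0                       # basal glucose (mg/dL) and insulin (uU/mL)

def Ra(t):        # toy rate of appearance of ingested glucose (mg/kg/min)
    return 8.0 * (t / 30.0) * np.exp(1 - t / 30.0)

def insulin(t):   # toy plasma insulin excursion after the glucose drink
    return Ib + 40.0 * (t / 45.0) * np.exp(1 - t / 45.0)

def omm(t, y):
    G, X = y
    dG = -(SG + X) * G + SG * Gb + Ra(t) / V      # glucose kinetics
    dX = -p2 * (X - SI * (insulin(t) - Ib))        # remote insulin action
    return [dG, dX]

sol = solve_ivp(omm, (0, 180), [Gb, 0.0], dense_output=True)
print("peak glucose (mg/dL):", round(sol.y[0].max(), 1))
```

Parameter estimation then amounts to fitting SG, SI, p2, and V so the simulated G(t) matches the measured OGTT curve.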
Procedia PDF Downloads 93
16531 Automatic Post Stroke Detection from Computed Tomography Images
Authors: C. Gopi Jinimole, A. Harsha
Abstract:
For detecting strokes, Computed Tomography (CT) is the preferred modality for imaging abnormalities or infarction in the brain. Because of problems in the window settings used to evaluate brain CT images, they perform very poorly in early-stage infarction detection. This paper presents an automatic estimation method for the window settings of CT images to obtain proper contrast of the hyper infarction present in the brain. In the proposed work, the window width is estimated automatically for each slice, and the window centre is changed to a new value of 31 HU, which is the average of the HU values of the grey matter and white matter in the brain. The automatic window width estimation is based on the average of the median of statistical central moments. With the newly suggested window centre and estimated window width, the hyper infarction or post-stroke regions in CT brain images are properly detected. The proposed approach assists radiologists in CT evaluation of early quantitative signs of delayed stroke, so that the severe hemorrhage it may lead to in the future can be prevented by providing timely medication to patients.
Keywords: computed tomography (CT), hyper infarction or post stroke region, Hounsfield Unit (HU), window centre (WC), window width (WW)
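The windowing operation itself is simple: HU values inside [centre - width/2, centre + width/2] are mapped to the displayable grey range. The sketch below applies the paper's fixed centre of 31 HU; the width-estimation rule is simplified here to an illustrative robust statistic, not the paper's exact moment-based formula, and the HU data are synthetic.

```python
# CT window centre/width applied to a synthetic Hounsfield-unit slice (sketch).
import numpy as np

def apply_window(hu_slice, centre, width):
    """Map HU values inside [centre - width/2, centre + width/2] to 0-255."""
    lo, hi = centre - width / 2.0, centre + width / 2.0
    clipped = np.clip(hu_slice, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
hu_slice = rng.normal(33, 6, size=(512, 512))   # toy brain-tissue HU values

centre = 31.0                                    # average of grey- and white-matter HU
# Illustrative per-slice width from a robust spread statistic (assumed rule):
width = 4.0 * np.median(np.abs(hu_slice - np.median(hu_slice)))
img = apply_window(hu_slice, centre, width)
print("display range:", img.min(), "-", img.max(), "| window width:", round(width, 1), "HU")
```

A narrow window centred between grey and white matter stretches the small HU differences of early infarction across the full grey scale, which is what makes the region visible.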
Procedia PDF Downloads 203
16530 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand
Authors: Neeta Kumari, Gopal Pathak
Abstract:
Artificial neural networks have proved to be an efficient tool for non-parametric modeling of data in various applications where the output is non-linearly associated with the input. They are a preferred tool for many predictive data mining applications because of their power, flexibility, and ease of use. A standard feed-forward network (FFN) is used to predict the groundwater fluoride content. The ANN model is trained using a backpropagation algorithm with Tansig and Logsig activation functions and a varying number of neurons. The models are evaluated on the basis of statistical performance criteria, namely Root Mean Squared Error (RMSE), regression coefficient (R2), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE), and the index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction in the limited-data situation of a hard-rock region like the western parts of Jharkhand with sufficiently good accuracy.
Keywords: artificial neural network (ANN), FFN (feed-forward network), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination
Procedia PDF Downloads 551
16529 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandria: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management, so as to comply with the Earth system. The natural exchange patterns of ecosystems have consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex, while Failure Mode and Effect Analysis (FMEA) is the analytical approach used for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gases is postulated, analyzed and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contours of concentration. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes of pollutants, considering multiple factors through multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis were conducted. The unique structures are balanced within equilibrium patterns, such as the biosphere, and collect a composite index of many distributed feedback flows. These dynamic structures relate to their physical and chemical properties and enable a gradual, prolonged incremental pattern. While this spatial model structure is argued from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic or architectural perspective. The hypothesis is an attempt to unify an analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
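The Gaussian plume model named above has a standard closed form for the ground-reflected concentration downwind of a continuous point source. The sketch below evaluates it; the source rate, wind speed, release height, and the dispersion-coefficient correlations are illustrative values, not the paper's chlorine scenario.

```python
# Gaussian plume concentration downwind of a continuous point source (sketch).
import numpy as np

def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (kg/m^3).
    Q: emission rate (kg/s), u: wind speed (m/s), H: effective release height (m)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # image source = ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

x = 500.0                                                 # m downwind
# Simple power-law dispersion coefficients for a neutral stability class (illustrative)
sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
c = gaussian_plume(y=0.0, z=1.5, Q=2.0, u=4.0, H=10.0, sigma_y=sigma_y, sigma_z=sigma_z)
print(f"centreline concentration at {x:.0f} m: {c:.2e} kg/m^3")
```

Evaluating this on a grid of (x, y) points yields the risk contour maps of concentration mentioned in the abstract.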
Procedia PDF Downloads 82
16528 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) to the true values (10% each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, since it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend, and it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which share the channel trend of the reference in a lowered-dimension space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
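The PCA-to-MDS-to-SVM chain described above can be sketched end to end in a few lines. The synthetic permeability ensemble and the two classes below (models similar/dissimilar to the reference) are illustrative stand-ins for the paper's 100 channel models ranked by WOPR error.

```python
# PCA -> MDS -> SVM classification of an ensemble of reservoir models (sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_models, n_cells = 100, 400
perm = rng.lognormal(mean=3.0, sigma=0.5, size=(n_models, n_cells))
perm[:50, :200] *= 4.0                      # toy "channel" trend in half the ensemble
labels = np.array([0] * 50 + [1] * 50)      # 0 = low WOPR error, 1 = high WOPR error

scores = PCA(n_components=10).fit_transform(perm)   # main geological features
coords = MDS(n_components=2, random_state=0).fit_transform(scores)  # 2-D projection

train = np.r_[0:10, 50:60]                  # the 20% most similar/dissimilar models
clf = SVC(kernel="rbf").fit(coords[train], labels[train])
pred = clf.predict(coords)                  # classify the remaining 80%
print("models selected as reliable:", np.where(pred == 0)[0][:10], "...")
```

Averaging the permeability fields of the selected models then gives the probability map used for regeneration.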
Procedia PDF Downloads 160
16527 Role of Pulp Volume Method in Assessment of Age and Gender in Lucknow, India, an Observational Study
Authors: Anurag Tripathi, Sanad Khandelwal
Abstract:
Age and gender determination are required in forensics for victim identification. There is secondary dentine deposition throughout life, resulting in decreased pulp volume and size. Evaluation of pulp volume using Cone Beam Computed Tomography (CBCT) is a noninvasive method to evaluate the age and gender of an individual. The study was done to evaluate the efficacy of the pulp volume method in the determination of age and gender. Aims/Objectives: The study was conducted to estimate age and determine sex by measuring tooth pulp volume with the help of CBCT. An observational study of one year's duration on CBCT data of individuals was conducted in Lucknow. Maxillary central incisors (CI) and maxillary canines (C) of the randomly selected samples were assessed for measurement of pulp volume using software. Statistical analysis: chi-square test, arithmetic mean, standard deviation, Pearson's correlation, linear and logistic regression analysis. Results: The CBCT data of ninety individuals with an age range of 18-70 years were evaluated for the pulp volume of the central incisor and canine (CI and C). The Pearson correlation coefficient between tooth pulp volume (CI and C) and chronological age suggested that pulp volume decreases with age. Validation of the equations for sex determination showed higher prediction accuracy for CI (56.70%) and lower for C (53.30%). Conclusion: Pulp volume obtained from CBCT is a reliable indicator for age estimation and gender prediction.
Keywords: forensic, dental age, pulp volume, cone beam computed tomography
Procedia PDF Downloads 100
16526 Model Order Reduction Using Hybrid Genetic Algorithm and Simulated Annealing
Authors: Khaled Salah
Abstract:
Model order reduction has been one of the most challenging topics in recent years. In this paper, a hybrid of the genetic algorithm (GA) and the simulated annealing algorithm (SA) is used to approximate high-order transfer functions (TFs) by lower-order TFs. The hybrid algorithm is applied to model order reduction taking into consideration two important issues: improving accuracy and preserving the properties of the original model. Both matter for improving the performance of simulation and computation and for maintaining the behavior of the original complex models being reduced. Compared to conventional mathematical methods that have been used to obtain reduced-order models of high-order complex models, our proposed method provides better results in terms of reduced run-time. Thus, the proposed technique could be used in electronic design automation (EDA) tools.
Keywords: genetic algorithm, simulated annealing, model reduction, transfer function
Procedia PDF Downloads 143
16525 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves
Authors: Hanifeh Imanian, Morteza Kolahdoozan
Abstract:
The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. A multiphase numerical model is applied in which the waves and the oil phase are computed concurrently, and the accuracy of its hydraulic calculations has been proven. More than 200 scenarios of oil spilling in wave waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and finally, 6 parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications comprising height, length and period, and spilled oil characteristics including density, viscosity and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for oil dispersion prediction in oil spill events in marine areas.
Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill
Procedia PDF Downloads 234
16524 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics
Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova
Abstract:
We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models to model the infectious potential of malware. Compartmental models are most efficient for calculating the infectious potential of a disease. In this paper, we discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset, and then apply these network-traffic features to epidemiological compartmental models. This paper demonstrates how epidemiological principles can be applied to the novel analysis of key cybersecurity behaviours and trends, and provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.
Keywords: cybersecurity, epidemiology, cyber epidemiology, malware
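For reference, here is a minimal sketch of the deterministic SEIR baseline that the SUEICRN model is contrasted with, read in malware terms (susceptible, exposed, infected, recovered hosts). The rates are illustrative, not calibrated to the DNS dataset, and this is the classical model, not the authors' enhanced stochastic one.

```python
# Classical deterministic SEIR compartmental model of host compromise (sketch).
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma = 0.4, 1 / 3.0, 1 / 7.0   # contact, incubation, remediation rates
N = 10_000                                    # hosts on the network

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N                    # susceptible hosts getting exposed
    dE = beta * S * I / N - sigma * E         # exposed hosts becoming infectious
    dI = sigma * E - gamma * I                # infectious hosts being remediated
    dR = gamma * I
    return [dS, dE, dI, dR]

sol = solve_ivp(seir, (0, 160), [N - 10, 0, 10, 0], dense_output=True)
t = np.linspace(0, 160, 161)
S, E, I, R = sol.sol(t)
print("peak infected hosts:", int(I.max()), "at day", int(t[I.argmax()]))
print("basic reproduction number R0 =", beta / gamma)
```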
Procedia PDF Downloads 110
16523 Study the Multifaceted Therapeutic Properties of the IQGAP1shRNA Plasmid on Rat Liver Cancer Model
Authors: Khairy M. A. Zoheir, Nehma A. Ali, Ahmed M. Darwish, Mohamed S. Kishta, Ahmed A. Abd-Rabou, Mohamed A. Abdelhafez, Karima F. Mahrous
Abstract:
The study comprehensively investigated the multifaceted therapeutic properties of the IQGAP1shRNA plasmid, encompassing its hepatoprotective, immunomodulatory, and anticancer activities. The study employed a Prednisolone-induced immunosuppressed rat model to assess the hepatoprotective and immunomodulatory effects of the IQGAP1shRNA plasmid. Using this model, the IQGAP1shRNA plasmid was found to modulate haematopoiesis, improving RBC, platelet, and WBC counts, underscoring its potential in hematopoietic homeostasis. Organ atrophy, a hallmark of immunosuppression in the spleen, heart, liver, ovaries, and kidneys, was reversed with IQGAP1shRNA plasmid treatment, reinforcing its hepatotrophic and organotropic capabilities. Elevated hepatic biomarkers (ALT, AST, ALP, LPO) indicative of hepatocellular injury and oxidative stress were reduced with GST, highlighting its hepatoprotective and antioxidative effects. The IQGAP1shRNA plasmid also restored depleted antioxidants (GSH and SOD), emphasizing its potent antioxidative and free-radical-scavenging capabilities. Molecular insights into immune dysregulation revealed downregulation of IQGAP1, IQGAP3, interleukin-2 (IL-2), and interleukin-4 (IL-4) mRNA expression in the liver of immunosuppressed rats. IL-2 and IL-4 play pivotal roles in immune regulation, T-cell activation, and B-cell differentiation. Notably, treatment with the IQGAP1shRNA plasmid exhibited a significant upregulation of IL-2 and IL-4 mRNA expression, thereby accentuating its immunomodulatory potential in orchestrating immune homeostasis. Additionally, immune dysregulation was associated with increased levels of TNF-α; treatment with the IQGAP1shRNA plasmid effectively decreased TNF-α levels, further underscoring its role in modulating inflammatory responses and restoring immune balance in immunosuppressed rats. Additionally, pharmacokinetics, bioavailability, drug-likeness, and toxicity risk assessment predictions suggest its potential as a pharmacologically favourable agent with no serious adverse effects. In conclusion, this study confirms the therapeutic potential of the IQGAP1shRNA plasmid, showcasing its effectiveness against hepatotoxicity, oxidative stress, and immunosuppression, as well as its notable anticancer activity.
Keywords: IQGAP1, shRNA, cancer, liver, rat
Procedia PDF Downloads 15
16522 Traction Behavior of Linear Piezo-Viscous Lubricants in Rough Elastohydrodynamic Lubrication Contacts
Authors: Punit Kumar, Niraj Kumar
Abstract:
The traction behavior of lubricants with a linear pressure-viscosity response in EHL line contacts is investigated numerically for smooth as well as rough surfaces. The analysis involves the simultaneous solution of the Reynolds, elasticity and energy equations along with the computation of lubricant properties and surface temperatures. The temperature-modified Doolittle-Tait equations are used to calculate viscosity and density as functions of fluid pressure and temperature, while the Carreau model is used to describe the lubricant rheology. The surface roughness is assumed to be sinusoidal, and it is present on the nearly stationary surface in a near-pure sliding EHL conjunction. The linear P-V oil is found to yield much lower traction coefficients and slightly thicker EHL films than the synthetic oil for a given set of dimensionless speed and load parameters. Besides, the increase in traction coefficient attributed to surface roughness is much lower in the former case. The present analysis emphasizes the importance of employing a realistic pressure-viscosity response for accurate prediction of EHL traction.
Keywords: EHL, linear pressure-viscosity, surface roughness, traction, water/glycol
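The Carreau model named above describes shear thinning: viscosity falls from a zero-shear plateau toward an infinite-shear limit as the shear rate grows. The sketch below evaluates the standard Carreau form; eta0, eta_inf, lambda, and n are illustrative values, not the paper's fitted lubricant parameters.

```python
# Carreau shear-thinning viscosity model (sketch with assumed parameters).
import numpy as np

def carreau_viscosity(shear_rate, eta0, eta_inf, lam, n):
    """eta = eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot)^2)^((n-1)/2)."""
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

shear_rates = np.logspace(2, 7, 6)   # 1/s, spanning EHL-relevant shear rates
eta = carreau_viscosity(shear_rates, eta0=0.04, eta_inf=0.001, lam=1e-5, n=0.6)
for g, e in zip(shear_rates, eta):
    print(f"shear rate {g:.1e} 1/s -> viscosity {e:.4f} Pa.s")
```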
Procedia PDF Downloads 384
16521 Hidden Oscillations in the Mathematical Model of the Optical Binary Phase Shift Keying (BPSK) Costas Loop
Authors: N. V. Kuznetsov, O. A. Kuznetsova, G. A. Leonov, M. V. Yuldashev, R. V. Yuldashev
Abstract:
Nonlinear analysis of phase-locked loop (PLL)-based circuits is a challenging task; thus, simulation is widely used for their study. In this work, we consider a mathematical model of the optical Costas loop and demonstrate the limitations of the simulation approach related to the existence of so-called hidden oscillations in the phase space of the model.
Keywords: optical Costas loop, mathematical model, simulation, hidden oscillation
Procedia PDF Downloads 441
16520 Reference Model for the Implementation of an E-Commerce Solution in Peruvian SMEs in the Retail Sector
Authors: Julio Kauss, Miguel Cadillo, David Mauricio
Abstract:
E-commerce is a business model that allows companies to optimize the processes of buying, selling, transferring goods and exchanging services through computer networks or the Internet. In Peru, electronic commerce is used infrequently. This situation is due, in part, to the fact that there is no model that allows companies to implement an e-commerce solution, which means that most SMEs do not have adequate knowledge to adapt to electronic commerce. In this work, a reference model is proposed for the implementation of an e-commerce solution in Peruvian SMEs in the retail sector. It consists of five phases: Business Analysis, Business Modeling, Implementation, Post Implementation and Results. The present model was validated in an SME of the Peruvian retail sector through the implementation of an e-commerce platform, through which the company increased its sales through the delivery channel by 10% in the first month of deployment. This result showed that the model is easy to implement, economical and agile. In addition, it allowed the company to increase its business offer, adapt to e-commerce and improve customer loyalty.
Keywords: e-commerce, retail, SMEs, reference model
Procedia PDF Downloads 321
16519 Multicollinearity and MRA in Sustainability: Application of the Raise Regression
Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez
Abstract:
Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating factor is often highly correlated with the constitutive terms, so great multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: inflated variances of the estimators; a tendency for regressors to appear non-significant when they probably are significant, together with a very high coefficient of determination; incorrect signs of the coefficients; and high sensitivity of the results to small changes in the dataset. Finally, the strong relationship among the explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Carried over to the moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that yields reliable results. A review of the works that applied MRA in the top ten journals of the field shows that multicollinearity is mostly disregarded: fewer than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the application of recent methodologies to MRA; in particular, the raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables, the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union for the last available year, regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called "subgroup regression analysis." The main conclusion of this work is that applying new techniques to the field can substantially improve the results of the analysis. In particular, the use of raise regression mitigates great multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
Keywords: multicollinearity, MRA, interaction, raise
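The geometric idea of raising can be shown on a toy two-regressor model: the problematic variable is moved away from the other one along its orthogonal residual, which lowers the variance inflation factor (VIF) without discarding information. This is a minimal sketch of that idea under the assumption that the raising factor lambda is chosen by the analyst; it is not the full MRA application of the paper.

```python
# Raise regression on a nearly collinear pair: raise x2 along its residual (sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)        # nearly collinear with x1
y = 2.0 + 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

def vif(a, b):
    """Variance inflation factor between two regressors."""
    r = np.corrcoef(a, b)[0, 1]
    return 1.0 / (1.0 - r**2)

# Residuals of x2 on x1 (with intercept): the orthogonal component used to raise x2
X1 = np.column_stack([np.ones(n), x1])
e = x2 - X1 @ np.linalg.lstsq(X1, x2, rcond=None)[0]

lam = 5.0                                        # raising factor (analyst's choice)
x2_raised = x2 + lam * e                         # geometrically separated from x1

print("VIF before raising:", round(vif(x1, x2), 1))
print("VIF after raising: ", round(vif(x1, x2_raised), 1))
```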
Procedia PDF Downloads 107
16518 Kinetic Façade Design Using 3D Scanning to Convert Physical Models into Digital Models
Authors: Do-Jin Jang, Sung-Ah Kim
Abstract:
In designing a kinetic façade, it is hard for the designer to make digital models due to the complex geometry in motion. This paper aims to present a methodology for converting a point cloud of a physical model into a single digital model with a certain topology and motion. The method uses a Microsoft Kinect sensor, and color markers were defined and applied to three paper-folding-inspired designs. Although the resulting digital model cannot represent the whole folding range of the physical model, the method supports the designer in conducting a performance-oriented design process with the rough physical model in the reduced folding range.
Keywords: design media, kinetic facades, tangible user interface, 3D scanning
Procedia PDF Downloads 414
16517 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of the building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is utilized to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model, allowing for the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models based on natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
Procedia PDF Downloads 28
16516 An Improved Model of Estimation Global Solar Irradiation from in situ Data: Case of Oran Algeria Region
Authors: Houcine Naim, Abdelatif Hassini, Noureddine Benabadji, Alex Van Den Bossche
Abstract:
In this paper, two models to estimate the monthly average daily global radiation on a horizontal surface were applied to the site of Oran (35.38° N, 0.37° W). We present a comparison between them: the first is a regression equation of the Angstrom type, and the second was developed by the present authors with some suggested modifications, using as input parameters astronomical parameters (latitude, longitude, and altitude) and meteorological parameters (relative humidity). The comparisons are made using the mean bias error (MBE), root mean square error (RMSE), mean percentage error (MPE), and mean absolute bias error (MABE). This comparison shows that the second model is closer to the experimental values than the Angstrom model.
Keywords: meteorology, global radiation, Angstrom model, Oran
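The Angstrom-type regression mentioned above relates the clearness index to the sunshine fraction, H/H0 = a + b(n/N), and fitting it is a one-line least-squares problem. In the sketch below the twelve monthly values are synthetic placeholders, not the Oran measurements, and the error metrics match those named in the abstract.

```python
# Fitting an Angstrom-type regression H/H0 = a + b*(n/N) to monthly data (sketch).
import numpy as np

rng = np.random.default_rng(0)
sunshine_fraction = rng.uniform(0.4, 0.85, 12)                         # n/N per month
clearness = 0.25 + 0.50 * sunshine_fraction + rng.normal(0, 0.02, 12)  # H/H0 per month

b, a = np.polyfit(sunshine_fraction, clearness, 1)   # slope b, intercept a
pred = a + b * sunshine_fraction
rmse = np.sqrt(np.mean((pred - clearness) ** 2))     # root mean square error
mbe = np.mean(pred - clearness)                      # mean bias error
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.4f}, MBE = {mbe:.5f}")
```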
Procedia PDF Downloads 234