Search results for: factor models
9981 Research on the Renewal and Utilization of Space under the Bridge in Chongqing Based on Spatial Potential Evaluation
Authors: Xvelian Qin
Abstract:
Urban "organic renewal" based on developing existing resources in high-density urban areas has become the mainstream of urban development in the new era. Space under bridges is an important stock of public space in high-density urban areas, and remodeling its value is an effective way to alleviate the shortage of public space resources. However, because the renewal process for under-bridge space lacks an evaluation step, a large amount of under-bridge space has been left idle, facing low conversion efficiency, imprecise development decision-making, and functional positioning poorly adapted to citizens' needs. It is therefore of great practical significance to construct an evaluation system for under-bridge space renewal potential and to explore renewal modes. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through questionnaire interviews with users of well-built under-bridge spaces, twenty-two potential evaluation indexes are selected across three factor types ("objective demand," "construction feasibility," and "construction suitability") and six levels: land resources, infrastructure, accessibility, safety, space quality, and ecological environment. The analytic hierarchy process and expert scoring are used to determine the index weights and construct a potential evaluation system for under-bridge space in high-density urban areas of Chongqing, and suitable directions for renewal and utilization are explored. The aim is to provide a feasible theoretical basis and scientific decision support for the future use of under-bridge space.
Keywords: high-density urban area, potential evaluation, space under bridge, renewal and utilization
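The weighting step described above (analytic hierarchy process with expert scoring) can be sketched with the geometric-mean method for deriving priority weights from a pairwise comparison matrix. The comparison values below are hypothetical, not those elicited in the study.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean (row) method."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]  # normalize so the weights sum to 1

# Hypothetical expert judgments comparing the three factor types:
# objective demand vs. construction feasibility vs. construction suitability.
pairwise = [
    [1.0, 3.0, 5.0],      # demand judged 3x feasibility, 5x suitability
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)
```

A full AHP application would also check the consistency ratio of the matrix before accepting the weights.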
Procedia PDF Downloads 70
9980 Modeling the Effects of Temperature on Ambient Air Quality Using AERMOD
Authors: Mustapha Babatunde, Bassam Tawabini, Ole John Nielson
Abstract:
Air dispersion (AD) models such as AERMOD are important tools for estimating the environmental impacts of air pollutant emissions into the atmosphere from anthropogenic sources. The outcome of these models is significantly linked to climate conditions such as air temperature, which is expected to differ in the future due to global warming. With scientific projections of impending changes to the future climate of Saudi Arabia, especially an anticipated temperature rise, there is a potential direct impact on the dispersion patterns of air pollutants produced by AD models. To our knowledge, no similar studies have been carried out in Saudi Arabia to investigate such an impact. Therefore, this research investigates the effect of temperature change on air quality in the Dammam Metropolitan Area, Saudi Arabia, using AERMOD coupled with station data, with sulphur dioxide (SO₂) as a model air pollutant. The research uses the AERMOD model to predict SO₂ dispersion trends in the surrounding area. Emissions from five (5) industrial stacks affecting twenty-eight (28) receptors in the study area were considered for the climate period (2010-2019) and the future mid-century period (2040-2060) under different scenarios of elevated temperature (+1°C, +3°C, and +5°C) across averaging periods of 1 h, 4 h, and 8 h. Results showed that SO₂ levels at the receiving sites under current and simulated future climatic conditions fall within the allowable limits of the WHO and KSA air quality standards. Results also revealed that the projected rise in temperature would only mildly increase SO₂ concentration levels: the average increase in SO₂ levels was 0.04%, 0.14%, and 0.23% for temperature increases of 1, 3, and 5 degrees, respectively.
In conclusion, the outcome of this work elucidates the degree to which global warming and climate change affect air quality and can help policymakers in their decision-making, given the significant health challenges associated with ambient air pollution in Saudi Arabia.
Keywords: air quality, sulfur dioxide, dispersion models, global warming, KSA
Procedia PDF Downloads 82
9979 Leveraging SHAP Values for Effective Feature Selection in Peptide Identification
Authors: Sharon Li, Zhonghang Xia
Abstract:
Post-database search is an essential phase in peptide identification using tandem mass spectrometry (MS/MS) to refine peptide-spectrum matches (PSMs) produced by database search engines. These engines frequently face difficulty differentiating between correct and incorrect peptide assignments. Despite advances in statistical and machine learning methods aimed at improving the accuracy of peptide identification, challenges remain in selecting critical features for these models. In this study, two machine learning models (a random forest and a support vector machine) were applied to three datasets to enhance PSMs. SHAP values were utilized to determine the significance of each feature within the models. The experimental results indicate that the random forest model consistently outperformed the SVM across all datasets. Further analysis of SHAP values revealed that the importance of features varies depending on the dataset, indicating that a feature's role in model predictions can differ significantly. This variability in feature selection can lead to substantial differences in model performance, with false discovery rate (FDR) differences exceeding 50% between different feature combinations. Through SHAP value analysis, the most effective feature combinations were identified, significantly enhancing model performance.
Keywords: peptide identification, SHAP value, feature selection, random forest, support vector machine
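The SHAP reasoning above can be illustrated with an exact Shapley-value computation over a toy "model quality" function on three candidate PSM features. The feature names and scores below are invented for illustration; real SHAP tooling approximates per-prediction attributions efficiently rather than enumerating every subset.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all subsets (tractable only for a few features)."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(frozenset(s) | {f}) - value(frozenset(s)))
        phi[f] = total
    return phi

# Hypothetical "re-scorer quality" as a function of which PSM features
# are used (illustrative numbers, not from the study).
scores = {
    frozenset(): 0.50,
    frozenset({"xcorr"}): 0.70, frozenset({"deltaCn"}): 0.60,
    frozenset({"charge"}): 0.52,
    frozenset({"xcorr", "deltaCn"}): 0.85,
    frozenset({"xcorr", "charge"}): 0.72,
    frozenset({"deltaCn", "charge"}): 0.62,
    frozenset({"xcorr", "deltaCn", "charge"}): 0.88,
}
phi = shapley_values(["xcorr", "deltaCn", "charge"],
                     lambda s: scores[frozenset(s)])
```

By the efficiency property, the attributions sum to the gain of the full feature set over the empty set (0.88 - 0.50 = 0.38).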
Procedia PDF Downloads 23
9978 Interaction of Phytochemicals Present in Green Tea, Honey and Cinnamon to Human Melanocortin 4 Receptor
Authors: Chinmayee Choudhury
Abstract:
The human melanocortin 4 receptor (HMC4R) is one of the most promising drug targets for the treatment of obesity, as it controls appetite. A deletion of residues 88-92 in HMC4R is sometimes the cause of severe obesity in humans. In this study, two homology models are constructed for the normal as well as the mutated HMC4R using Modeller9v7, and some phytochemicals present in green tea, honey, and cinnamon are docked to them to study their differential binding to the normal and mutated HMC4R as compared to the natural agonist α-MSH. The phytochemicals, which have appetite-suppressant activities, are constructed, minimized, and docked to the normal and mutated HMC4R models using ArgusLab 4.0.1. The modes of binding of the phytochemicals with the normal and mutated HMC4Rs are compared, as is the binding of the natural agonist α-melanocyte stimulating hormone (α-MSH) to both receptors. It is observed that the phytochemicals kaempferol and epigallocatechin-3-gallate (EGCG), present in green tea and honey; isorhamnetin, chlorogenic acid, chrysin, galangin, and pinocembrin, present in honey; and cinnamaldehyde, cinnamyl acetate, and cinnamyl alcohol, present in cinnamon, have the capacity to form more stable complexes with the mutated HMC4R than α-MSH does. They may therefore be potential agonists of HMC4R for appetite suppression.
Keywords: HMC4R, α-MSH, docking, phytochemical, appetite suppressant, homology modelling
Procedia PDF Downloads 195
9977 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics
Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink
Abstract:
Technological learning and learning curve models have been used continuously to estimate photovoltaic (PV) cost development over time for climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process. Yet accurate and realistic predictions of PV cost development are still difficult to achieve. This paper develops four hypothetical-alternative learning curve models by proposing different combinations of technological learning sources, including both local and global technology experience and the knowledge stock. It focuses specifically on the non-linear relationship between costs and technological learning sources and their dynamic interaction, and uses the system dynamics approach to obtain a more accurate PV cost estimation for future development. As the case study, data from China are gathered to illustrate that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Further, absorbing and integrating global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating the knowledge stock is not realistic for current PV deployment in China, it still plays an effective positive role in future PV cost reduction.
Keywords: photovoltaic, system dynamics, technological learning, learning curve
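A single-factor learning curve, the building block of the models discussed above, can be sketched as follows; the cost and capacity numbers are illustrative, not the paper's data.

```python
import math

def learning_curve_cost(c0, x0, x, learning_rate):
    """One-factor learning curve: each doubling of cumulative capacity x
    cuts unit cost by `learning_rate` (e.g. 0.20 = 20% per doubling)."""
    b = -math.log2(1.0 - learning_rate)   # learning exponent
    return c0 * (x / x0) ** (-b)

# Illustrative numbers: module cost 4.0 $/W at 1 GW cumulative capacity,
# 20% learning rate, evaluated after three doublings (8 GW).
c = learning_curve_cost(4.0, 1.0, 8.0, 0.20)
```

Three doublings at a 20% learning rate give 4.0 x 0.8^3 = 2.048 $/W; the paper's multi-source models extend this single-factor form with separate local, global, and knowledge-stock terms.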
Procedia PDF Downloads 96
9976 Level of Understanding of the Catholic Doctrines in Relation to the Way of Life of Ignatian Graduates
Authors: Maria Wendy Mendoza-Solomo
Abstract:
The study assessed the level of understanding of Catholic doctrines in relation to the way of life of Ignatian graduates of Ateneo de Naga University (ADNU). It was conducted to find out whether ADNU is successful in leading its students to a deeper moral understanding of the world centered on Jesus Christ through its curriculum, academic programs, activities, and practices. The study further evaluated whether graduates live out their Catholic commitment to Christ in their current way of life, and determined the factors that affected their level of understanding of Catholic doctrines and their current way of life. Descriptive, qualitative, evaluative, and correlational analyses determined the level of understanding of the Catholic doctrines and the current way of life of 390 graduates, and correlated the level of understanding with moral life and worship. The factors that affected the graduates' level of understanding and their current way of life were measured. A researcher-made instrument was distributed to the respondents either in the traditional way or via an online survey to reach graduates across the globe. Major findings were: (1) The weighted mean of graduates' level of understanding of Catholic doctrines was 4.63. (2) Along moral life it was 4.07, while along worship it was 3.83. (3) Catholic doctrines and moral life had a Pearson r value of 0.79; doctrines and worship, 0.87; and worship and moral life, 0.89. (4) Understanding of the doctrines was affected most highly by the teacher factor, with a mean of 4.09. Moral life and worship were affected most highly by the teacher and technological factors, both ranked 1.5 (4.04). (5) Along Catholic doctrines, the teacher factor had an r value of 0.90, and the environmental factor, -0.40. Along moral life, the teacher factor had an r value of -0.30; technological, -0.92; socio-economic, -0.93; political, -0.83; and environmental, -0.90.
Along worship, the teacher factor had a Pearson r value of 0.36; technological and socio-economic, -0.78; political, -0.73; and environmental, -0.72. Major conclusions were: (1) Graduates had a very high level of understanding of the Catholic doctrines as summarized in the Creed, which is grounded in the Sacred Scriptures. (2) They live out this Catholic commitment to Christ by obeying the Commandments very extensively but need more participation in religious and parish activities. They have overwhelming spirituality and religiosity in terms of receiving the sacraments and sacramental practices, except for reading the Bible and reflecting on its passages. (3) The graduates' level of understanding of the Catholic doctrines had a very strong correlation with their current way of life. (4) Teacher, socio-economic, technological, environmental, and political factors significantly affected their understanding of the Catholic doctrines and their current way of life. (5) The teacher factor had a very strong relationship with the doctrines; technological and political, weak; environmental, moderate; and socio-economic, very weak. The teacher factor had a weak relationship with moral life and worship, while the other factors had a very strong relationship with moral life and a strong relationship with worship.
Keywords: Catholic doctrines, Ignatian graduates, relationship, way of life
Procedia PDF Downloads 356
9975 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks
Authors: Mehdi Janbaz
Abstract:
The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that the risks faced by the sovereign sector are highly interconnected with banking risks and are likely to trigger and reinforce each other. This study aims to examine (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how the dynamics of EU Credit Default Swap spreads are affected by crude oil price fluctuations. The hypotheses are tested by employing fitting risk measures and a four-stage linear modeling approach. Sovereign senior 5-year Credit Default Swap (CDS) spreads are used as the core measure of credit risk. Monthly time-series data for the variables used in the study are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, Oil, Sovereign Debt, and Slope) on CDS spread dynamics. Second, bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill rate and the 3-month LIBOR rate in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index (OVX), are added to the major significant factors of the first two models. Finally, a model is generated from a combination of the major factors of each variable set, in addition to a crisis dummy.
The findings show that (1) the explanatory power of LIBOR-OIS for the sovereign CDS spread of the Eurozone is very significant, and (2) there is a meaningful adverse co-movement between the crude oil price and the CDS price of the Eurozone. Surprisingly, adding the TED spread to the analysis alongside the LIBOR-OIS spread in the third and fourth models increased the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, Slope, the oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful.
Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED
Procedia PDF Downloads 144
9974 Mathematical Programming Models for Portfolio Optimization Problem: A Review
Authors: Mazura Mokhtar, Adibah Shuib, Daud Mohamad
Abstract:
The portfolio optimization problem has received a lot of attention from both researchers and practitioners over the last six decades. This paper provides an overview of the current state of research in portfolio optimization supported by mathematical programming techniques. In addition, it surveys solution algorithms for portfolio optimization models, classifying them by nature into heuristic and exact methods. To serve these purposes, 40 related articles appearing in international journals from 2003 to 2013 were gathered and analyzed. Based on the literature review, it has been observed that stochastic programming and goal programming constitute the highest number of mathematical programming techniques employed to tackle the portfolio optimization problem. It is hoped that the paper can meet the needs of researchers and practitioners for easy reference on portfolio optimization.
Keywords: portfolio optimization, mathematical programming, multi-objective programming, solution approaches
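As a minimal example of the mean-variance family from which these mathematical programs descend, the two-asset minimum-variance portfolio has a closed form; the volatility and correlation parameters below are illustrative, not drawn from the surveyed articles.

```python
def min_variance_weight(var1, var2, cov12):
    """Weight of asset 1 in the two-asset minimum-variance portfolio
    (classical Markowitz mean-variance, unconstrained closed form)."""
    return (var2 - cov12) / (var1 + var2 - 2.0 * cov12)

# Illustrative inputs: sigma1 = 20%, sigma2 = 10%, correlation 0.3.
sigma1, sigma2, rho = 0.20, 0.10, 0.3
w1 = min_variance_weight(sigma1 ** 2, sigma2 ** 2, rho * sigma1 * sigma2)
w2 = 1.0 - w1   # most weight goes to the lower-volatility asset
```

Realistic models in the surveyed literature add expected-return targets, cardinality limits, and transaction costs, which is what pushes them into heuristic or exact mathematical programming territory.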
Procedia PDF Downloads 349
9973 Power Reduction of Hall-Effect Sensor by Pulse Width Modulation of Spinning-Current
Authors: Hyungil Chae
Abstract:
This work presents a method to reduce the power consumption of the spinning-current scheme in a Hall-effect sensor for low-power magnetic sensor applications. The spinning-current technique periodically changes the direction of the bias current and can separate the signal from the DC offset. The bias current is proportional to the sensor sensitivity but also increases the power consumption. To achieve both high sensitivity and low power consumption, the bias current can be pulse-width modulated. When the bias current duration Tb is reduced by a factor of N compared to the spinning-current half-period Tₛ/2, the total power consumption is reduced by a factor of N. N can be large as long as the Hall-effect sensor settles within Tb. The proposed scheme is implemented and simulated in a 0.18 um CMOS process, and the power saving factor is 9.6 when N is 10. Acknowledgements: This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (20160001360022003, Development of Hall Semi-conductor for Smart Car and Device).
Keywords: chopper stabilization, Hall-effect sensor, pulse width modulation, spinning current
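The claimed saving follows from simple duty-cycle arithmetic; a sketch with illustrative component values (not from the paper) is:

```python
def average_bias_power(i_bias, r_hall, n):
    """Average power of a pulse-width-modulated spinning-current bias:
    the bias is on for only 1/n of each spinning half-period, so the
    average power drops by a factor of n (idealized; settling overhead
    reduces the saving slightly in practice)."""
    peak = i_bias ** 2 * r_hall   # power while the bias is on
    return peak / n

# Illustrative values: 1 mA bias through a 1 kOhm Hall plate, N = 10.
p_avg = average_bias_power(1e-3, 1e3, 10)
```

The ideal saving is exactly N; the simulated factor of 9.6 at N = 10 reflects such non-idealities.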
Procedia PDF Downloads 484
9972 A Statistical Analysis on Relationship between Temperature Variations with Latitude and Altitude regarding Total Amount of Atmospheric Carbon Dioxide in Iran
Authors: Masoumeh Moghbel
Abstract:
Nowadays, carbon dioxide produced by human activities is considered the main driver of global warming. Given the role of CO₂ and its ability to trap heat, the main objective of this research is to study the effect of atmospheric CO₂ (as recorded at Mauna Loa) on variations of temperature parameters (daily mean, minimum, and maximum temperature) at 5 meteorological stations in Iran, selected according to latitude and altitude, over a 40-year statistical period. Firstly, the trend of the temperature parameters was studied by regression and the non-graphical Mann-Kendall method. Then, the relation between temperature variations and CO₂ was studied using correlation techniques. Also, the impact of the CO₂ amount on temperature at different atmospheric levels (850 and 500 hPa) was analyzed. The results illustrated that the correlation coefficient between temperature variations and CO₂ is more significant at low latitudes and high altitudes than in other regions. It is important to note that altitude, one of the main geographic factors, is limited in how it affects temperature variations, so that the correlation coefficient between these two parameters at 850 hPa (r = 0.86) is more significant than at 500 hPa (r = 0.62).
Keywords: altitude, atmospheric carbon dioxide, latitude, temperature variations
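The correlation analysis reported above reduces to the sample Pearson coefficient; a self-contained sketch with illustrative annual series (not the study's station data) is:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative annual series: CO2 concentration (ppm) and mean temperature (C).
co2 = [340, 345, 350, 356, 362, 369]
temp = [16.1, 16.0, 16.3, 16.4, 16.6, 16.7]
r = pearson_r(co2, temp)   # strong positive correlation for these toy series
```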
Procedia PDF Downloads 408
9971 Reliability Evaluation of a Payment Model in Mobile E-Commerce Using Colored Petri Net
Authors: Abdolghader Pourali, Mohammad V. Malakooti, Muhammad Hussein Yektaie
Abstract:
A mobile payment system in mobile e-commerce generally has high security, so that users can trust it for business deals, sales, financial transactions, and so on. However, an architecture or payment model in e-commerce only shows how users and merchants interact and collaborate; it does not give stakeholders any evaluation of the effectiveness of, or confidence in, financial transactions. In this paper, we present a detailed assessment of the reliability of a mobile payment model in mobile e-commerce using formal models and colored Petri nets. Finally, we demonstrate that the reliability of this system is high (case study: a secure payment model in mobile commerce).
Keywords: reliability, colored Petri net, assessment, payment models, m-commerce
Procedia PDF Downloads 537
9970 Applying Arima Data Mining Techniques to ERP to Generate Sales Demand Forecasting: A Case Study
Authors: Ghaleb Y. Abbasi, Israa Abu Rumman
Abstract:
This paper modeled sales history from 2012 to 2015, aggregated into monthly bins, for five products of a medical supply company in Jordan. Consistent patterns extracted from the sales demand history in the Enterprise Resource Planning (ERP) system were used to generate sales demand forecasts using the time series analysis technique Auto Regressive Integrated Moving Average (ARIMA). This was used to model realistic sales demand patterns, predict future demand, and decide the best models for the five products. Analysis revealed that the current replenishment system led to inventory overstocking.
Keywords: ARIMA models, sales demand forecasting, time series, R code
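A heavily simplified sketch of the ARIMA idea, first-differencing followed by an AR(1) fit on the differences, is shown below with invented monthly figures. A production analysis would use a full ARIMA implementation with order selection and diagnostics (the paper's "R code" keyword suggests the authors worked in R).

```python
def ar1_diff_forecast(series):
    """Minimal ARIMA(1,1,0)-flavored sketch: first-difference the series,
    fit an AR(1) on the differences by least squares (no intercept),
    and forecast the next value one step ahead."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    num = sum(x * y for x, y in zip(diffs, diffs[1:]))
    den = sum(x * x for x in diffs[:-1])
    phi = num / den                      # AR(1) coefficient on differences
    return series[-1] + phi * diffs[-1]  # last level + predicted change

# Illustrative monthly unit sales (not the company's data):
sales = [120, 132, 128, 141, 150, 147, 158]
forecast = ar1_diff_forecast(sales)
```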
Procedia PDF Downloads 385
9969 Optimizing Performance of Tablet's Direct Compression Process Using Fuzzy Goal Programming
Authors: Abbas Al-Refaie
Abstract:
This paper aims at improving the performance of the tableting process using statistical quality control and fuzzy goal programming. Statistical control tools were used to characterize the existing process for three critical responses: the averages of a tablet's weight, hardness, and thickness. At the initial process factor settings, the estimated process capability index values for the averages of weight, hardness, and thickness were 0.58, 3.36, and 0.88, respectively. The L9 array was utilized to design the experiments. Fuzzy goal programming was then employed to find the combination of optimal factor settings. Optimization results showed that the process capability index values for the averages of weight, hardness, and thickness improved to 1.03, 4.42, and 1.42, respectively. Such improvements resulted in significant savings in quality and production costs.
Keywords: fuzzy goal programming, control charts, process capability, tablet optimization
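The process capability indices quoted above follow standard formulas; a sketch with hypothetical tablet-weight numbers (not the paper's measurements) is:

```python
def capability_indices(mean, std, lsl, usl):
    """Process capability: Cp ignores centering; Cpk penalizes a mean
    that sits off-center between the spec limits."""
    cp = (usl - lsl) / (6.0 * std)
    cpk = min(usl - mean, mean - lsl) / (3.0 * std)
    return cp, cpk

# Hypothetical tablet-weight spec: target 500 mg +/- 25 mg,
# observed mean 502 mg, observed standard deviation 6 mg.
cp, cpk = capability_indices(mean=502.0, std=6.0, lsl=475.0, usl=525.0)
```

Values above roughly 1.0 (and preferably 1.33) are conventionally read as "capable", which is why raising the weight index from 0.58 to 1.03 is the headline improvement.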
Procedia PDF Downloads 270
9968 Little RAGNER: Toward Lightweight, Generative, Named Entity Recognition through Prompt Engineering, and Multi-Level Retrieval Augmented Generation
Authors: Sean W. T. Bayly, Daniel Glover, Don Horrell, Simon Horrocks, Barnes Callum, Stuart Gibson, Mac Misuira
Abstract:
We assess the suitability of recent, ∼7B-parameter, instruction-tuned language models for Generative Named Entity Recognition (GNER). Alongside Retrieval Augmented Generation (RAG), and supported by task-specific prompting, our proposed Multi-Level Information Retrieval method achieves notable improvements over finetuned entity-level and sentence-level methods. We conclude that language models directed toward this task are highly capable of distinguishing between positive classes (precision). However, smaller models seem to struggle to find all entities (recall). Poorly defined classes such as "Miscellaneous" exhibit substantial declines in performance, likely due to the ambiguity they introduce into the prompt. This is partially resolved through a self-verification method using engineered prompts that encode stricter class definitions, particularly where class boundaries risk overlapping, such as the conflation of the location "Britain" with the nationality "British". Finally, we explore correlations between model performance on the GNER task and performance on relevant academic benchmarks.
Keywords: generative named entity recognition, information retrieval, lightweight artificial intelligence, prompt engineering, personal information identification, retrieval augmented generation, self verification
Procedia PDF Downloads 49
9967 Extent to Which Various Academic Factors Cause Stress in Undergraduate Students at a University in Karachi and What Unhealthy Coping Strategies They Use
Authors: Sumara Khanzada
Abstract:
This research investigated how much stress is induced by various study-related factors in undergraduate students at a renowned university in Karachi, along with the unhealthy coping strategies the students use to manage the stress. The study-related factors considered were curriculum- and instruction-based stress, the teacher-student relationship, the assessment system, and different components of academic work. A survey was conducted in which questionnaires were administered to a hundred students. The data were analyzed quantitatively to determine the percentages of stress induced by the various factors. The study found that the student-teacher relationship is the strongest factor causing stress in undergraduate students, specifically when teachers do not deliver lectures effectively and give assignments and presentations without clear guidelines and instructions. The second most important stress factor was the different components of academic life, such as parental expectations and pressure to achieve one's goals. The assessment system was found to be the third key factor inducing stress, affecting students' cognitive and psychological functioning. The most commonly used unhealthy coping strategy for stress management was procrastination. In light of the findings, it is recommended that importance be given to teacher training to ensure proper instruction and a healthy teacher-student relationship. Effective support programs, workshops, seminars, and awareness programs should be arranged to promote awareness of mental health in educational institutions. Moreover, additional zero-credit courses should be offered to teach students stress management and healthy coping skills.
Keywords: stress, coping strategies, academic stress, relationship
Procedia PDF Downloads 84
9966 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized their fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art in fields where images are the key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes of images, conventional methods often fail to capture spatial patterns effectively, prompting the development of models that use more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification approach whose self-attention mechanism makes it more robust, since self-attention lets it learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images.
In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has gained the attention of many ML engineers because it combines feature learning and clustering in a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, in turn, are well known for using latent embeddings to group similar images without requiring prior labels, via a probabilistic clustering approach.
Keywords: machine learning, deep learning, image classification, image clustering
Procedia PDF Downloads 12
9965 The Use of Drones in Measuring Environmental Impacts of the Forest Garden Approach
Authors: Andrew J. Zacharias
Abstract:
The forest garden approach (FGA) was established by Trees for the Future (TREES) over the organization's 30 years of agroforestry projects in Sub-Saharan Africa. This method transforms traditional agricultural systems into highly managed gardens that produce food and marketable products year-round. The effects of the FGA on food security, dietary diversity, and economic resilience have been measured closely, and TREES has begun to closely monitor the environmental impacts through the use of sensors mounted on unmanned aerial vehicles, commonly known as 'drones'. These drones collect thousands of pictures to create 3-D models in both the visible and the near-infrared wavelengths. Analysis of these models provides TREES with quantitative and qualitative evidence of improvements to annual above-ground biomass and leaf area indices, as measured in situ using NDVI calculations.
Keywords: agroforestry, biomass, drones, NDVI
Procedia PDF Downloads 157
9964 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models
Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi
Abstract:
In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample biased, because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derive and investigate the Adjusted Network Information Criterion (ANIC), based on Kullback's symmetric divergence, which is designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, overall, the ANIC improves model selection across more sample sizes than the NIC does.
Keywords: statistical neural network, network information criterion, adjusted network information criterion, transfer function
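Criteria in this family share a generic shape: a goodness-of-fit term plus a complexity penalty, where a sample-size-aware penalty is what motivates the adjustment. The sketch below uses an AIC-style small-sample correction as a stand-in for the network-specific NIC/ANIC penalties, which involve curvature terms not reproduced here; the candidate models are hypothetical.

```python
def select_model(candidates, n):
    """Pick the candidate minimizing a criterion of the generic form
    -2 * log-likelihood + penalty(k, n), where k counts free parameters
    (network weights) and n is the sample size."""
    def criterion(loglik, k):
        # AIC-style penalty with a small-sample correction: the penalty
        # grows when n is small relative to k, discouraging overfitting.
        return -2.0 * loglik + 2.0 * k * n / (n - k - 1)
    return min(candidates, key=lambda c: criterion(c["loglik"], c["k"]))

# Hypothetical fitted networks: log-likelihood and number of weights.
models = [
    {"name": "5 hidden units", "loglik": -120.0, "k": 16},
    {"name": "10 hidden units", "loglik": -115.0, "k": 31},
]
best = select_model(models, n=50)   # small n: the leaner network wins
```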
Procedia PDF Downloads 567
9963 Experimental Parameters' Effects on the Electrical Discharge Machining Performances
Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann
Abstract:
The growing market for microsystems (MST) and micro-electromechanical systems (MEMS) is driving the search for manufacturing techniques that can serve as alternatives to microelectronics-based technologies, which are generally expensive and time-consuming. Hot embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes, so other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric at a distance of a few micrometers (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which machine the material. To produce master models with high resolution and smooth surfaces, the discharge mechanism must be well controlled. However, several problems are encountered, such as the random nature of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effects of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, have been studied. This analysis helps to improve machining performance, such as the workpiece surface condition and the lateral gap of the craters.
Keywords: craters, electrical discharges, micro-electrical discharge machining, microsystems
Procedia PDF Downloads 74
9962 Electrical Load Estimation Using Estimated Fuzzy Linear Parameters
Authors: Bader Alkandari, Jamal Y. Madouh, Ahmad M. Alkandari, Anwar A. Alnaqi
Abstract:
A new formulation of the fuzzy linear estimation problem is presented. It is formulated as a linear programming problem. The objective is to minimize the spread of the data points, taking into consideration the type of membership function of the fuzzy parameters, so as to satisfy the constraints at each measurement point and to ensure that the original membership is included in the estimated membership. Different models are developed for a fuzzy triangular membership. The proposed models are applied to several examples from the area of fuzzy linear regression and, finally, to examples of estimating the electrical load on a busbar. It has been found that the proposed technique is well suited to electrical load estimation, since the nature of the load is characterized by uncertainty and vagueness.
Keywords: fuzzy regression, load estimation, fuzzy linear parameters, electrical load estimation
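A minimal sketch of the kind of linear program described here, assuming a Tanaka-style fuzzy linear regression with symmetric triangular memberships (centers a, non-negative spreads c) and full inclusion of the data points in the estimated fuzzy band; the paper's own formulations may differ in the membership handling:

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_linear_fit(x, y):
    """Fit y ~ (a0, c0) + (a1, c1) * x with triangular fuzzy coefficients.
    Minimizes the total spread sum_i (c0 + c1*|x_i|) subject to every
    observation lying inside the estimated band (inclusion level h = 0)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, ax = len(x), np.abs(x)
    # decision variables: [a0, a1, c0, c1]
    obj = [0.0, 0.0, float(n), float(ax.sum())]
    A_ub, b_ub = [], []
    for xi, axi, yi in zip(x, ax, y):
        # upper edge of band must reach y_i:  a0 + a1*x_i + c0 + c1*|x_i| >= y_i
        A_ub.append([-1.0, -xi, -1.0, -axi]); b_ub.append(-yi)
        # lower edge of band must stay below y_i
        A_ub.append([1.0, xi, -1.0, -axi]); b_ub.append(yi)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None), (0, None)])
    a0, a1, c0, c1 = res.x
    return (a0, a1), (c0, c1)
```

The two inequality rows per measurement point are exactly the "constraints on each measurement point" the abstract refers to, with the spreads carrying the vagueness of the load data.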
Procedia PDF Downloads 540
9961 Finite Element-Based Stability Analysis of Roadside Settlements Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8Mw 2015 Barpak, Gorkha, Nepal Earthquake
Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe
Abstract:
The research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the 7.8 Mw 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicenters of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were 7 and 11 kilometers (south-east), respectively, from Barpak village, nearer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock was not in Barpak village, as reported so far, but somewhere near Yamagaun village. The shaking experienced in Yamagaun during the earthquake was much stronger than in Barpak. In this context, we have carried out a detailed study to investigate the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks and toe failures are the most severe. In this regard, the stability issues of the existing settlements and the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, are addressed. Given the importance of this issue, a field survey was carried out to understand the behavior of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope at Profiles 2-2, 3-3 and 4-4 is not safe enough for infrastructure development, even under normal soil slope conditions, for material models 2, 3 and 4; however, the slope seems quite safe at Profile 1-1 for all four material models. The results also indicate that the first three profiles are marginally safe for material models 2, 3 and 4, respectively. Profile 4-4 is not safe enough for any of the four material models.
Thus, Profile 4-4 needs special care to make the slope stable.
Keywords: earthquake, finite element method, landslide, stability
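The paper's stability assessment is FEM-based; as a far simpler point of comparison, the classic infinite-slope limit-equilibrium model gives a closed-form factor of safety for a dry planar soil slope. This is a swapped-in textbook check, not the paper's method, and all parameter values below are illustrative rather than site data:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety of a dry infinite slope (limit equilibrium).
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: failure-plane depth (m),
    beta_deg: slope inclination (deg)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)   # driving shear stress
    sigma_n = gamma * z * math.cos(beta) ** 2           # normal stress on plane
    return (c + sigma_n * math.tan(phi)) / tau          # resisting / driving
```

For a cohesionless soil this reduces to FS = tan(phi)/tan(beta), so FS = 1 exactly when the slope angle equals the friction angle.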
Procedia PDF Downloads 348
9960 An Exploration of Renewal Utilization of Under-bridge Space Based on Spatial Potential Evaluation - Taking Chongqing Municipality as an Example
Authors: Xuelian Qin
Abstract:
Urban "organic renewal" based on the development of existing resources in high-density urban areas has become the mainstream of urban development in the new era. As an important stock resource of public space in high-density urban areas, under-bridge space can have its value remodeled, which is an effective way to alleviate the shortage of public space resources. However, due to the lack of an evaluation step in the renewal process, a large number of under-bridge space resources have been left idle, facing problems of low space conversion efficiency, imprecise development decision-making, and poor fit between functional positioning and citizens' needs. Therefore, constructing an evaluation system for under-bridge space renewal potential and exploring renewal modes is of great practical significance. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through questionnaire interviews with users of existing well-built under-bridge spaces, twenty-two potential evaluation indexes are selected across three categories of factors, "objective demand factors, construction feasibility factors and construction suitability factors", and six levels: land resources, infrastructure, accessibility, safety, space quality and ecological environment. The analytic hierarchy process and the expert scoring method are used to determine the index weights, construct a potential evaluation system for the space under bridges in high-density urban areas of Chongqing, and explore suitable directions for its renewal and utilization. The aim is to provide a feasible theoretical basis and scientific decision support for the future use of under-bridge space.
Keywords: high-density urban area, potential evaluation, space under bridge, renewal and utilization
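The analytic hierarchy process step described here can be sketched as follows: given a pairwise-comparison matrix from the expert scoring, the index weights are the normalized principal eigenvector, and a consistency ratio checks the coherence of the judgments. This is a generic AHP sketch; the paper's actual comparison matrices are not reproduced:

```python
import numpy as np

def ahp_weights(M):
    """Weights and consistency ratio from an AHP pairwise-comparison matrix
    via the principal eigenvector (Saaty's method). M must be a positive
    reciprocal matrix: M[j, i] == 1 / M[i, j]."""
    M = np.asarray(M, float)
    n = M.shape[0]
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)      # Perron eigenvector is sign-definite
    w /= w.sum()                     # normalize weights to sum to 1
    ci = (vals[k].real - n) / (n - 1)                    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.24)  # random index
    return w, ci / ri                # weights, consistency ratio (CR < 0.1 ok)
```

A perfectly consistent matrix (every entry equal to a weight ratio) recovers the weights exactly with a consistency ratio of zero.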
Procedia PDF Downloads 95
9959 Exergy Based Analysis of Parabolic Trough Collector Using Twisted-Tape Inserts
Authors: Atwari Rawani, Suresh Prasad Sharma, K. D. P. Singh
Abstract:
In this paper, an analytical investigation based on energy and exergy analysis of a parabolic trough collector (PTC) with alternating clockwise and counter-clockwise twisted-tape inserts in the absorber tube is presented. For fully developed flow under quasi-steady-state conditions, energy equations have been developed to analyze the rise in fluid temperature, thermal efficiency, entropy generation and exergy efficiency. The effect of system and operating parameters on performance has also been studied. A computer program based on the mathematical models was developed in C++ to estimate the temperature rise of the fluid and evaluate performance under specified conditions. For the numerical simulations, four different twist ratios, x = 2, 3, 4, 5, and mass flow rates from 0.06 kg/s to 0.16 kg/s, covering the Reynolds number range of 3000-9000, are considered. This study shows that twisted-tape inserts show great promise for enhancing the performance of the PTC. Results show that for x = 1, the Nusselt number/heat transfer coefficient is found to be 3.528 and 3.008 times that of the plain absorber of the PTC at mass flow rates of 0.06 kg/s and 0.16 kg/s, respectively, while the corresponding enhancements in thermal efficiency are 12.57% and 5.065%, respectively. The exergy efficiency is found to be 10.61% and 10.97%, and the enhancement factor 1.135 and 1.048, for the same set of conditions.
Keywords: exergy efficiency, twisted tape ratio, turbulent flow, useful heat gain
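The quoted Reynolds number range follows from the mass flow rate through the standard relation Re = 4*m_dot / (pi * D * mu) for flow in a circular tube. The tube diameter and fluid viscosity below are assumed illustrative values, not the paper's parameters:

```python
import math

def reynolds(m_dot, d, mu):
    """Reynolds number for flow in a circular tube.
    m_dot: mass flow rate (kg/s), d: inner diameter (m),
    mu: dynamic viscosity (Pa.s). Re = rho*V*d/mu with
    V = m_dot / (rho * pi * d**2 / 4) collapses to 4*m_dot/(pi*d*mu)."""
    return 4.0 * m_dot / (math.pi * d * mu)
```

Since Re is proportional to the mass flow rate for a fixed tube and fluid, the 0.06-0.16 kg/s span scales Re by the same 8/3 factor.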
Procedia PDF Downloads 173
9958 A Bathtub Curve from Nonparametric Model
Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos
Abstract:
This paper presents a nonparametric method to obtain the hazard-rate "bathtub curve" for power system components. The model is a mixture of the three known phases of a component's life, the decreasing failure rate (DFR), the constant failure rate (CFR) and the increasing failure rate (IFR), represented by three parametric Weibull models. The parameters are obtained by simultaneously fitting the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and failure rate curves, the useful lifetime and the characteristic lifetime are defined. To demonstrate the model, the historic time-to-failure data of distribution transformers are used as an example. The resulting bathtub curve shows the failure rate over the equipment's lifetime, which can be applied in economic and replacement decision models.
Keywords: bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution
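The three-phase hazard model can be sketched as a sum of three Weibull hazards: shape < 1 gives the DFR (infant-mortality) phase, shape = 1 the CFR (random-failure) phase, and shape > 1 the IFR (wear-out) phase. The parameter values below are illustrative, not fitted to transformer data:

```python
def weibull_hazard(t, shape, scale):
    # Hazard rate of Weibull(shape k, scale lam): (k/lam) * (t/lam)**(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t, params):
    """Bathtub hazard as the sum of three Weibull hazards
    (DFR + CFR + IFR components)."""
    return sum(weibull_hazard(t, k, lam) for k, lam in params)

PARAMS = [(0.5, 2.0),    # shape < 1: decreasing failure rate (DFR)
          (1.0, 10.0),   # shape = 1: constant failure rate (CFR)
          (4.0, 30.0)]   # shape > 1: increasing failure rate (IFR)
```

Early on the DFR term dominates and the hazard falls; in mid-life the constant term dominates; late in life the IFR term takes over and the hazard rises again, producing the bathtub shape.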
Procedia PDF Downloads 446
9957 Epigenetic Drugs for Major Depressive Disorder: A Critical Appraisal of Available Studies
Authors: Aniket Kumar, Jacob Peedicayil
Abstract:
Major depressive disorder (MDD) is a common and important psychiatric disorder. Several clinical features of MDD suggest an epigenetic basis for its pathogenesis. Since epigenetics (heritable changes in gene expression not involving changes in DNA sequence) may underlie the pathogenesis of MDD, epigenetic drugs such as DNA methyltransferase inhibitors (DNMTi) and histone deacetylase inhibitors (HDACi) may be useful for treating MDD. The available literature indexed in PubMed on preclinical trials of epigenetic drugs for the treatment of MDD was investigated. The search terms used were 'depression' or 'depressive' and 'HDACi' or 'DNMTi'. It was found that there were three preclinical trials using HDACi and three using DNMTi for the treatment of MDD. All the trials were conducted on rodents (mice or rats). The animal models of depression used were the learned-helplessness model, the forced swim test, the open field test, and the tail suspension test. One study used a genetic rat model of depression (the Flinders Sensitive Line). The HDACi tested were sodium butyrate, compound 60 (Cpd-60), and valproic acid. The DNMTi tested were 5-azacytidine and decitabine. All three preclinical trials using HDACi showed an antidepressant effect in animal models of depression, as did all three preclinical trials using DNMTi. Thus, epigenetic drugs, namely HDACi and DNMTi, may prove to be useful in the treatment of MDD and merit further investigation for this disorder.
Keywords: DNA methylation, drug discovery, epigenetics, major depressive disorder
Procedia PDF Downloads 188
9956 A Biomechanical Model for the Idiopathic Scoliosis Using the Antalgic-Trak Technology
Authors: Joao Fialho
Abstract:
The mathematical modelling of idiopathic scoliosis has been studied over the years. The models presented in those papers are based on orthotic stabilization of idiopathic scoliosis, in which a transversal force is applied to the human spine in a continuous manner. When considering the ATT (Antalgic-Trak Technology) device, the existing models cannot be used, as the forces applied are no longer transversal nor applied in a continuous manner; in this device, vertical traction is applied. In this study, we propose to model idiopathic scoliosis under the ATT (Antalgic-Trak Technology) device and, with the parameters obtained from the mathematical modelling, set up a case-by-case individualized therapy plan for each patient.
Keywords: idiopathic scoliosis, mathematical modelling, human spine, Antalgic-Trak technology
Procedia PDF Downloads 269
9955 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme
Authors: Shahram Jamali, Samira Hamed
Abstract:
One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to achieve high performance under the dynamic conditions of the network. Whereas the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm to tune RED's parameters and provide efficiency and better performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization and dropped-packet count.
Keywords: active queue management, RED, Markov model, random early detection algorithm
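For reference, the classic RED marking rule whose parameters Markov-RED tunes can be sketched as follows. This is the textbook RED drop probability with its EWMA queue estimate (the "gentle" variant is omitted), not the paper's Markov model:

```python
def red_drop_prob(avg, min_th, max_th, max_p):
    """Classic RED marking probability as a function of the EWMA
    average queue length: 0 below min_th, linear ramp up to max_p
    between the thresholds, and 1 at or above max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def ewma(avg, q_sample, w=0.002):
    # Exponentially weighted moving average of the instantaneous queue
    # length; small w smooths out transient bursts.
    return (1 - w) * avg + w * q_sample
```

The fixed parameters here (min_th, max_th, max_p, w) are exactly the quantities a model-based scheme like Markov-RED adapts at run time.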
Procedia PDF Downloads 539
9954 Ontology Expansion via Synthetic Dataset Generation and Transformer-Based Concept Extraction
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer-learning techniques can be applied to fine-tune a large language model (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology expansion, synthetic dataset, transformer fine-tuning, concept extraction, DOLCE, BERT, taxonomy, LLM, NER
Procedia PDF Downloads 14
9953 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regressor, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracy, and that a simple algorithm like linear regression performs similarly to more sophisticated models while taking less time and fewer resources to implement.
Keywords: finance, machine learning, opening price, stock market
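A minimal sketch of a linear-regression baseline on lagged prices, using ordinary least squares on synthetic data; the paper's exact features, dataset, and train/test split are not reproduced here:

```python
import numpy as np

def fit_next_price(prices, lags=3):
    """Least-squares linear model predicting the next price from the
    previous `lags` prices (a simplified stand-in for a linear regressor
    on historical price windows)."""
    prices = np.asarray(prices, float)
    # design matrix: each row is `lags` consecutive prices, plus intercept
    X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
    X = np.column_stack([np.ones(len(X)), X])
    y = prices[lags:]                                   # target: next price
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(prices, coef, lags=3):
    window = np.concatenate(([1.0], np.asarray(prices, float)[-lags:]))
    return float(window @ coef)
```

On a perfectly linear price series the model recovers the trend exactly, which illustrates why a simple linear baseline can be surprisingly competitive.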
Procedia PDF Downloads 190
9952 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, owing to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging, to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, developing risk maps from data obtained by chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Chemical maps of the groundwater, built with a 2-D spatial Geographic Information System (GIS), provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that the highest contamination was observed in the central and southern areas, with a relatively smaller extent in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagrams indicate that the groundwater of the research area was mainly influenced by saline water and NO3. The pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC.
Based on the cluster analysis results, the groundwater has been categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging. Thus, the geostatistical models provided more accurate results for identifying the sources of contamination and evaluating groundwater quality, and GIS was a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, kriging techniques
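The kriging step can be sketched as ordinary kriging with an assumed exponential covariance model; in the study the variogram parameters would be fitted to the 79 samples, whereas the sill and range below are illustrative defaults:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=500.0):
    """Ordinary kriging estimate at location xy0 from samples (xy, z),
    using an exponential covariance model C(h) = sill * exp(-h / rng)."""
    xy = np.asarray(xy, float)
    z = np.asarray(z, float)
    xy0 = np.asarray(xy0, float)
    n = len(z)
    # sample-to-sample and sample-to-target covariances
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    C = sill * np.exp(-d / rng)
    c0 = sill * np.exp(-np.linalg.norm(xy - xy0, axis=1) / rng)
    # augmented system with Lagrange multiplier enforcing sum(w) == 1
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    rhs = np.append(c0, 1.0)
    w = np.linalg.solve(A, rhs)[:n]
    return float(w @ z)
```

Ordinary kriging is an exact interpolator: estimating at a sampled location reproduces the measured value, and the unbiasedness constraint keeps the weights summing to one.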
Procedia PDF Downloads 168