Search results for: Bayesian multilevel logit models
Paper Count: 7137

5907 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory

Authors: Damir Latypov

Abstract:

A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of the exponential speed-up available for sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
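
As a purely indicative comparison of the complexity scalings quoted above (constant factors and polylog terms are ignored, and the sketch is not from the paper itself), the growth of the three operation counts can be tabulated as follows:

# Indicative growth of the three scalings quoted in the abstract.
# Constant factors and polylog terms are ignored; only the trend is meaningful.
import math

for N in (10**4, 10**6, 10**8):
    dense_classical = N ** 2.373      # best-known classical dense solver, O(N^2.373)
    mlfmm = N * math.log2(N)          # MLFMM-accelerated solve, O(N log N)
    qlsa_dense = math.sqrt(N)         # dense-matrix QLSA with polylog(N) dropped
    print(f"N={N:>9}:  classical {dense_classical:.2e}   MLFMM {mlfmm:.2e}   QLSA {qlsa_dense:.2e}")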

Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory

Procedia PDF Downloads 152
5906 Understanding the Role of Social Entrepreneurship in Building Mobility of a Service Transportation Models

Authors: Liam Fassam, Pouria Liravi, Jacquie Bridgman

Abstract:

Introduction: The way we travel is rapidly changing; car ownership and use are declining among young people and residents of urban areas. Also, the increasing role and popularity of sharing-economy companies like Uber highlight a movement towards consuming transportation solutions as a service [Mobility of a Service]. This research looks to bridge the knowledge gap that exists between city mobility, smart cities, the sharing economy and social entrepreneurship business models. Understanding of this subject is crucial for smart city design, as access to affordable transport has been identified as a contributing factor to social isolation, leading to issues around health and wellbeing. Methodology: To explore the current fit vis-a-vis transportation business models and social impact, this research undertook a comparative analysis between a systematic literature review and a Delphi study. The systematic literature review was undertaken to gain an appreciation of the current academic thinking on ‘social entrepreneurship and smart city mobility’. The second phase of the research initiated a Delphi study across a group of 22 participants to review future opinion on ‘how social entrepreneurship can assist city mobility sharing models?’. The Delphi delivered an initial 220 results, which once cross-checked for duplication resulted in 130. These 130 answers were sent back to participants to score importance against a 5-point Likert scale, enabling a top-10 listing of areas for shared user transport in society to be gleaned. One further round (4) identified no change in the coefficient of variation, so no further rounds were required. Findings: Initial results of the literature review returned 1,021 journals using the search criteria ‘social entrepreneurship and smart city mobility’. Filtering by ‘peer review’, ‘date’, ‘region’ and ‘Chartered Association of Business Schools’ ranking yielded a resultant journal list of 75. Of these, 58 focused on smart city design, 9 on social enterprise in cityscapes, 6 on smart city network design and 3 on social impact, with no journals purporting the need for social entrepreneurship to be allied to city mobility. The future inclusion factors from the Delphi expert panel indicated that smart cities need to include shared-economy models in their strategies. Furthermore, social isolation borne of the costs of infrastructure needs addressing through holistic, apolitical social enterprise models, and a better understanding of social benefit measurement is needed. Conclusion: In investigating the collaboration between key public transportation stakeholders, a theoretical model of social enterprise transportation models that positively impact upon the smart city needs of reduced transport poverty and social isolation was formed. As such, the research has identified how a revised Mobility of a Service business model allied to social entrepreneurship can deliver impactful, measured social benefits associated with smart city design, extending existent research.

Keywords: social enterprise, collaborative transportation, new models of ownership, transport social impact

Procedia PDF Downloads 140
5905 Modeling and Benchmarking the Thermal Energy Performance of Palm Oil Production Plant

Authors: Mathias B. Michael, Esther T. Akinlabi, Tien-Chien Jen

Abstract:

Thermal energy consumption in a palm oil production plant comprises mainly steam, hot water and hot air. In most efficient plants, hot water and air are generated from the steam supply system. Research has shown that thermal energy use in palm oil production plants accounts for about 70 percent of the total energy consumption of the plant. In order to manage the plants’ energy efficiently, the energy systems are modelled and optimized. This paper aims to present a model of the steam supply system of a typical palm oil production plant in Ghana. The models include exergy and energy models of the steam boiler, the steam turbine and the palm oil mill. The paper further simulates the virtual plant model to obtain the thermal energy performance of the plant under study. The simulation results show that, under normal operating conditions, the boiler energy performance is considerably below the expected level as a result of several factors, including intermittent biomass fuel supply, significant moisture content of the biomass fuel and significant heat losses. The total thermal energy performance of the virtual plant is set as a baseline. The study finally recommends a number of energy efficiency measures to improve the plant’s energy performance.
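
A minimal sketch of the first-law (energy) and second-law (exergy) bookkeeping on which such boiler models rest is given below; all property values are illustrative round numbers, not plant data from this study:

# Illustrative first-law (energy) and second-law (exergy) boiler efficiencies.
# Steam/feedwater properties and fuel data are assumed round numbers, not plant data.
m_steam = 3.0                        # kg/s steam flow
h_steam, h_feed = 2780.0, 420.0      # kJ/kg specific enthalpies
s_steam, s_feed = 6.5, 1.3           # kJ/(kg K) specific entropies
T0 = 298.15                          # K, dead-state temperature
m_fuel, LHV = 1.2, 9000.0            # kg/s biomass, kJ/kg lower heating value
phi = 1.1                            # assumed fuel exergy-to-LHV ratio

q_useful = m_steam * (h_steam - h_feed)
ex_steam = m_steam * ((h_steam - h_feed) - T0 * (s_steam - s_feed))

eta_energy = q_useful / (m_fuel * LHV)
eta_exergy = ex_steam / (m_fuel * LHV * phi)
print(f"energy efficiency ~{eta_energy:.2f}, exergy efficiency ~{eta_exergy:.2f}")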

Keywords: palm biomass, steam supply, exergy and energy models, energy performance benchmark

Procedia PDF Downloads 347
5904 Practical Modelling of RC Structural Walls under Monotonic and Cyclic Loading

Authors: Reza E. Sedgh, Rajesh P. Dhakal

Abstract:

Shear walls have been used extensively as the main lateral force resisting systems in multi-storey buildings. Recent developments in performance-based design urge practicing engineers to conduct nonlinear static or dynamic analyses to evaluate the seismic performance of multi-storey shear wall buildings by employing the distinct analytical models suggested in the literature. For practical purposes, the application of macroscopic models to simulate the global and local nonlinear behavior of structural walls outweighs that of microscopic models. The required skill level, the computational time and limited access to specialized reinforced-concrete finite element packages prevent the general application of the latter in the performance-based design or assessment of multi-storey shear wall buildings in design offices. Hence, this paper sets out to verify the capability of the nonlinear shell element in a commercially available package (SAP2000) in simulating the response of selected specimens under monotonic and cyclic loads, using the rather simplified cyclic material laws available in the analytical tool. The selection of constitutive models, the determination of the related parameters of the constituent materials and an appropriate nonlinear shear model are presented in detail. Adoption of the proposed simple model demonstrated that the predicted results follow the overall trend of the experimental force-displacement curves. Although the predicted ultimate strength and the overall shape of the hysteresis loops agreed to some extent with the experiments, prediction of the ultimate displacement (the point of significant strength degradation) remains challenging in some cases.

Keywords: analytical model, nonlinear shell element, structural wall, shear behavior

Procedia PDF Downloads 404
5903 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are many gaps still to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. The solution is raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO). The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms used, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
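
A minimal sketch of the gridding step involved, using synthetic sounding points and SciPy's generic scattered-data interpolator; the study itself works with GEBCO, NOAA and Copernicus products and dedicated tools:

# Gridding synthetic "depth" samples onto a regular grid with two interpolation methods.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
lon = rng.uniform(-70, -60, 500)                               # synthetic survey points
lat = rng.uniform(30, 40, 500)
depth = -4000 + 800 * np.sin(lon / 3) * np.cos(lat / 2)        # synthetic seafloor

grid_lon, grid_lat = np.meshgrid(np.linspace(-70, -60, 100),
                                 np.linspace(30, 40, 100))
for method in ("linear", "cubic"):
    grid = griddata((lon, lat), depth, (grid_lon, grid_lat), method=method)
    print(method, "grid min/max:", np.nanmin(grid), np.nanmax(grid))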

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 78
5902 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment

Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai

Abstract:

Amongst all the natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, the quantification of the hazard becomes important for its assessment. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that the damage to life and property is minimized. Seismic hazard analysis plays an important role in the design of earthquake-resistant structures by providing rational values of the input parameters. In this paper, both mathematical and computational methods adopted by researchers globally in the past five years will be discussed. Mathematical approaches involving the concepts of Poisson's ratio, Convex Set Theory, the Empirical Green's Function, Bayesian probability estimation applied to seismic hazard, and FOSM (first-order second-moment) algorithm methods will be discussed. Computational approaches and the numerical model SSIFiBo, developed in MATLAB to study the dynamic soil-structure interaction problem, are also discussed in this paper. GIS-based tools, which are predominantly used in the assessment of seismic hazards, will be discussed as well.
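
For instance, under the Poisson occurrence model commonly assumed in probabilistic seismic hazard analysis, the probability of at least one exceedance during a design life follows directly from the annual exceedance rate; the short sketch below is a generic textbook illustration with an assumed return period, not a result from the paper:

# Probability of at least one exceedance in t years under a Poisson occurrence model:
# P = 1 - exp(-lambda * t), with lambda the annual exceedance rate.
import math

return_period = 475.0            # years (illustrative value)
lam = 1.0 / return_period        # annual exceedance rate
for t in (1, 50, 100):
    p = 1.0 - math.exp(-lam * t)
    print(f"design life {t:>3} yr: exceedance probability {p:.3f}")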

Keywords: computational methods, MATLAB, seismic hazard, seismic measurements

Procedia PDF Downloads 339
5901 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows pertaining to the performance of two batches of students was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy against the real data. ROC-AUC analysis was performed for all ten classes of the target variable (Grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
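
A minimal sketch of the synthetic-data workflow described above; the file name, column names and hyperparameters are hypothetical, plain scikit-learn is used in place of PyCaret's automated model selection, and all features are assumed to be numeric:

# Sketch: train CTGAN on real grade data, fit a classifier on the synthetic sample,
# then score one-vs-rest ROC-AUC on the real data. Names and values are hypothetical.
import pandas as pd
from ctgan import CTGAN                           # pip install ctgan
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

real = pd.read_csv("physics_grades.csv")          # hypothetical file: numeric features + "Grade"
synthesizer = CTGAN(epochs=300)
synthesizer.fit(real, discrete_columns=["Grade"])
synthetic = synthesizer.sample(1000)

clf = LogisticRegression(max_iter=1000)
clf.fit(synthetic.drop(columns="Grade"), synthetic["Grade"])

proba = clf.predict_proba(real.drop(columns="Grade"))
auc = roc_auc_score(real["Grade"], proba, multi_class="ovr",
                    average="macro", labels=clf.classes_)
print("mean one-vs-rest ROC-AUC on real data:", round(auc, 4))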

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN

Procedia PDF Downloads 43
5900 Vibrations of Springboards: Mode Shape and Time Domain Analysis

Authors: Stefano Frassinelli, Alessandro Niccolai, Riccardo E. Zich

Abstract:

Diving is an important Olympic sport. In this sport, the effective performance of the athlete is related to his capability to interact correctly with the springboard. In fact, the elevation of the jump and the correctness of the dive are influenced by the vibrations of the board. In this paper, the vibrations of the springboard will be analyzed by means of typical tools for vibration analysis: firstly, a modal analysis will be done on two different models of the springboard; then, these two models and a third one will be analyzed with a time analysis, performed by integrating the equations of motion of deformable bodies. All these analyses will be compared with experimental data measured on a real springboard by means of a 6-axis accelerometer; these measurements are aimed at assessing the proposed models. The acquired data will be analyzed both in the frequency domain and in the time domain.
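
A minimal sketch of the frequency-domain step applied to one accelerometer channel; the signal, sampling rate and modal frequencies below are synthetic stand-ins, not the measured springboard data:

# Estimate the dominant vibration frequency of one accelerometer channel via FFT.
# The signal is synthetic (a decaying mode plus a weaker mode and noise), not measured data.
import numpy as np

fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)
accel = (np.exp(-0.5 * t) * np.sin(2 * np.pi * 4.2 * t)
         + 0.3 * np.exp(-1.0 * t) * np.sin(2 * np.pi * 13.0 * t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, 1 / fs)
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.1f} Hz")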

Keywords: springboard analysis, modal analysis, time domain analysis, vibrations

Procedia PDF Downloads 458
5899 Predicting the Effect of Silicon Electrode Design Parameters on Thermal Performance of a Lithium-Ion Battery

Authors: Harika Dasari, Eric Eisenbraun

Abstract:

The present study models the role of electrode structural characteristics in the thermal behavior of lithium-ion batteries. Preliminary modeling runs have employed a 1D lithium-ion battery model coupled to a two-dimensional axisymmetric model, using silicon as the battery anode material. The two models are coupled through the heat generated and the average temperature. Our study focuses on silicon anode particle size, and it is observed that silicon anodes with nano-sized particles reduce the temperature of the battery in comparison to anodes with larger particles. These results are discussed in the context of the relationship between particle size and thermal transport properties in the electrode.

Keywords: particle size, NMC, silicon, heat generation, separator

Procedia PDF Downloads 285
5898 Stability Analysis of Two-delay Differential Equation for Parkinson's Disease Models with Positive Feedback

Authors: M. A. Sohaly, M. A. Elfouly

Abstract:

Parkinson's disease (PD) is a heterogeneous movement disorder that often appears in the elderly. PD is induced by a loss of dopamine secretion. Some drugs increase the secretion of dopamine. In this paper, we study the stability of PD models formulated as nonlinear delay differential equations. After a period of taking drugs, these act as positive feedback and increase the tremors of patients; the differential equation then has positive coefficients, and the system is unstable under these conditions. We will present a set of suggested modifications to make the system more compatible with the biodynamic system. While a set of numerical examples is given, this research paper is concerned with the mathematical analysis, and no clinical data have been used.
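
A minimal sketch of how such a two-delay equation can be integrated numerically; the linear equation, coefficients, delays and history below are generic illustrations, not the authors' Parkinson's disease model:

# Explicit-Euler integration of a generic linear two-delay equation
# x'(t) = a*x(t - tau1) + b*x(t - tau2). All values are hypothetical.
import numpy as np

a, b = 0.4, 0.3                    # positive coefficients -> expected instability
tau1, tau2 = 1.0, 2.5
dt, T = 0.01, 40.0
n = int(T / dt)
d1, d2 = int(tau1 / dt), int(tau2 / dt)

x = np.ones(n + max(d1, d2))       # constant history x(t) = 1 for t <= 0
for k in range(max(d1, d2), len(x) - 1):
    x[k + 1] = x[k] + dt * (a * x[k - d1] + b * x[k - d2])

print("final value after", T, "time units:", x[-1], "(grows without bound for positive coefficients)")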

Keywords: Parkinson's disease, stability, simulation, two delay differential equation

Procedia PDF Downloads 128
5897 Selection of Variogram Model for Environmental Variables

Authors: Sheikh Samsuzzhan Alam

Abstract:

The present study investigates the selection of a variogram model for analyzing spatial variations of environmental variables with a trend. Sometimes the auto-fitted theoretical variogram does not really capture the true nature of the empirical semivariogram, so proper exploration and analysis are needed to select the best variogram model. For this study, an open-source dataset collected from the California Soil Resource Lab is used to illustrate the problems that arise when fitting a theoretical variogram. The five most commonly used variogram models, namely the linear, Gaussian, exponential, Matérn, and spherical models, were fitted to the experimental semivariogram. Ordinary kriging was used to evaluate the accuracy of the selected variograms through cross-validation. This study is beneficial for selecting an appropriate theoretical variogram model for environmental variables.
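
A minimal sketch of computing an empirical (experimental) semivariogram, the quantity to which the theoretical models are fitted; the data below are synthetic, not the California Soil Resource Lab dataset:

# Empirical semivariogram of a synthetic spatial variable (binned by lag distance).
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))            # sample locations
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(200)

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
g = 0.5 * (values[:, None] - values[None, :]) ** 2     # semivariance of each pair

bins = np.linspace(0, 50, 11)
lag_centers = 0.5 * (bins[:-1] + bins[1:])
gamma = [g[(d > lo) & (d <= hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
print(dict(zip(np.round(lag_centers, 1), np.round(gamma, 3))))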

Keywords: anisotropy, cross-validation, environmental variables, kriging, variogram models

Procedia PDF Downloads 331
5896 Chemometric Analysis of Raw Milk Quality Originating from Conventional and Organic Dairy Farming in AP Vojvodina, Serbia

Authors: Sanja Podunavac-Kuzmanović, Denis Kučević, Strahinja Kovačević, Milica Karadžić, Lidija Jevrić

Abstract:

The present study describes the application of chemometric methods in the analysis of milk samples collected from a conventional dairy farm and an organic dairy farm in AP Vojvodina, Republic of Serbia. The chemometric analysis included the application of univariate regression modeling and the Analysis of Variance (ANOVA) method. The ANOVA was used in order to determine the differences in fatty acid content between the milk samples from the conventional and the organic farm. The results of the ANOVA testing indicate that there is a highly statistically significant difference in fatty acid content (saturated vs. unsaturated fatty acids) between the two farming systems. In addition, linear univariate models have been obtained by modeling the linear relationships between milk fat content and saturated fatty acid content, and between milk fat content and unsaturated fatty acid content. The models obtained on the basis of the milk samples originating from organic farming are statistically better than the models based on the milk samples from conventional farming.
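
A minimal sketch of the two statistical steps, a one-way ANOVA between farming systems and a univariate linear fit, run on synthetic fatty-acid data rather than the measured Vojvodina samples:

# One-way ANOVA between farming systems and a univariate linear fit (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sfa_conventional = rng.normal(70.0, 2.0, 30)   # % saturated fatty acids, invented values
sfa_organic = rng.normal(66.0, 2.0, 30)

f_stat, p_value = stats.f_oneway(sfa_conventional, sfa_organic)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

milk_fat = rng.normal(3.9, 0.3, 30)            # % milk fat
sfa = 40.0 + 7.0 * milk_fat + rng.normal(0, 1.0, 30)
slope, intercept, r, p, se = stats.linregress(milk_fat, sfa)
print(f"univariate model: SFA = {intercept:.1f} + {slope:.2f}*fat,  R^2 = {r**2:.2f}")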

Keywords: chemometrics, milk, organic farming, quality control

Procedia PDF Downloads 235
5895 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approach for estimating bridge deterioration uses Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower expected bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained when using Markov chains; however, it is desirable to use more data for better results.
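
A minimal sketch of the Markov-chain projection step, using a hypothetical four-state transition matrix rather than the probabilities calibrated in the study:

# Project bridge condition-state probabilities forward with a transition matrix.
# The matrix entries are hypothetical, not the study's calibrated probabilities.
import numpy as np

# States: 1 = good ... 4 = poor; rows sum to 1, no improvement without repair.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])      # a new bridge starts in state 1
for year in (5, 10, 20):
    dist = state @ np.linalg.matrix_power(P, year)
    print(f"year {year:>2}: condition-state probabilities {np.round(dist, 3)}")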

Keywords: concrete bridges, deterioration, Markov chains, probability matrix

Procedia PDF Downloads 335
5894 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation

Authors: Arian Hosseini, Mahmudul Hasan

Abstract:

To address the increasing need for efficient and accurate content moderation, we propose a lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step-model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.
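
A minimal sketch of a verification-style cascade in the "think small, think many" spirit described above, using generic scikit-learn models on synthetic data rather than the paper's lightweight color-feature networks:

# Two-stage cascade: a cheap filter flags candidates, then small verifiers vote.
# Generic scikit-learn models on synthetic data, not the paper's models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = X[:2000], X[2000:], y[:2000], y[2000:]

filter_model = LogisticRegression(max_iter=500).fit(X_train, y_train)
verifiers = [DecisionTreeClassifier(max_depth=4, random_state=s).fit(X_train, y_train)
             for s in range(5)]

flagged = filter_model.predict_proba(X_test)[:, 1] > 0.3    # cheap first pass
votes = np.mean([v.predict(X_test) for v in verifiers], axis=0) > 0.5
final = flagged & votes                                      # keep only verified positives
print("cascade accuracy:", np.mean(final == y_test.astype(bool)))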

Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing

Procedia PDF Downloads 52
5893 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries

Authors: Nessreen Y. Ibrahim

Abstract:

Designing the future according to a realistic analytical study of future market needs can be a milestone strategy for making major improvements in developing countries' economies. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of newly imported technologies and their application in local markets. Thus, there is a strong need for a paradigm shift in design thinking to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper was to build an innovative economic model to design a chosen possible future scenario: understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, highly dynamic political factors, and very rich cultural aspects, we consider it a very challenging example for applying FDM. The paper's results recommend using FDM numerical modeling as a starting point for designing the future.

Keywords: developing countries, economic models, future design, possible futures

Procedia PDF Downloads 265
5892 Machine Learning Assisted Performance Optimization in Memory Tiering

Authors: Derssie Mebratu

Abstract:

As a large variety of microservices, web services, social graph applications, and media applications are continuously developed, it is vital to design and build a reliable, efficient, and fast memory tiering system. Despite limited design, implementation, and deployment in the last few years, several techniques have recently been developed to improve memory tiering systems in the cloud. Some of these techniques determine an optimal scanning frequency, track page movement, identify recently accessed pages, and classify pages across the tiers as hot, warm, or cold, so that hot pages are stored in the first tier, Dynamic Random Access Memory (DRAM), warm pages in the second tier, Compute Express Link (CXL) attached memory, and cold pages in the third tier, Non-Volatile Memory (NVM). Beyond current proposals and implementations, we also develop a new technique based on a machine learning algorithm that improves throughput by 25% and latency by 95% compared with the baseline.
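
A minimal sketch of the hot/warm/cold placement logic described above; the thresholds and page-access records are invented for illustration and are not the proposed machine learning policy:

# Classify pages as hot/warm/cold from recency and access count, then map them
# to DRAM / CXL-attached memory / NVM tiers. Thresholds and records are invented.
from dataclasses import dataclass

@dataclass
class Page:
    page_id: int
    accesses_last_window: int
    seconds_since_last_access: float

def tier_for(page: Page) -> str:
    if page.accesses_last_window >= 32 and page.seconds_since_last_access < 1.0:
        return "DRAM"   # hot
    if page.accesses_last_window >= 4 or page.seconds_since_last_access < 10.0:
        return "CXL"    # warm
    return "NVM"        # cold

pages = [Page(0, 120, 0.2), Page(1, 6, 3.0), Page(2, 0, 600.0)]
for p in pages:
    print(p.page_id, "->", tier_for(p))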

Keywords: machine learning, bayesian optimization, memory tiering, CXL, DRAM

Procedia PDF Downloads 94
5891 Forecasting Solid Waste Generation in Turkey

Authors: Yeliz Ekinci, Melis Koyuncu

Abstract:

Successful planning of solid waste management systems requires successful prediction of the amount of solid waste generated in an area. Waste management planning can protect the environment and human health, hence it is tremendously important for countries. A lack of information on waste generation can cause many environmental and health problems. Turkey is a country that plans to join the European Union; hence, solid waste management is one of the most significant criteria that must be addressed in order to become part of this community. A solid waste management system requires a good forecast of solid waste generation. Thus, this study aims to forecast solid waste generation in Turkey. Artificial Neural Network and Linear Regression models will be used for this purpose. Many models will be run, and the best one will be selected based on predetermined performance measures.
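
A minimal sketch of such a model comparison, using a synthetic waste-generation series and simple features rather than the Turkish data, and selecting by hold-out mean absolute error:

# Compare an ANN (MLP) and linear regression on a synthetic waste-generation series.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
years = np.arange(1994, 2024)
population = 57 + 0.8 * (years - 1994) + rng.normal(0, 0.3, years.size)   # millions, invented
waste = 0.4 * population + 0.02 * (years - 1994) ** 1.2 + rng.normal(0, 0.5, years.size)

X = np.column_stack([years - 1994, population])
X_train, X_test, y_train, y_test = X[:-5], X[-5:], waste[:-5], waste[-5:]

models = {"linear regression": LinearRegression(),
          "ANN (MLP)": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: hold-out MAE = {mae:.2f} Mt")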

Keywords: forecast, solid waste generation, solid waste management, Turkey

Procedia PDF Downloads 505
5890 CAD Tool for Parametric Design Modification of Yacht Hull Surface Models

Authors: Shahroz Khan, Erkan Gunpinar, Kemal Mart

Abstract:

Parametric design techniques have recently become a vital concept in the field of Computer-Aided Design (CAD), providing the designer with a sophisticated platform to automate the design process efficiently. In these techniques, the design process starts by parameterizing the important features of the design models (typically the key dimensions) and implementing design constraints. The design constraints help to retain the overall shape of the model while its parameters are modified. However, the process of initializing an appropriate number of design parameters and constraints is the crucial part of parametric design techniques, especially for complex surface models such as yacht hulls. This paper introduces a method to create complex surface models suited to parametric design techniques, a method to define the right number of parameters and the respective design constraints, and a system to implement design parameters against the design constraint schema. In our proposed approach, the design process starts by dividing the yacht hull into three sections. Each section consists of different shape lines, which form the overall shape of the yacht hull. The shape lines are created using cubic Bézier curves, which allow greater design flexibility. Design parameters and constraints are defined on the shape lines in 3D design space to give designers better, individual handling of the parameters. Afterwards, shape modifiers are developed, which allow the modification of each parameter while satisfying the respective set of criteria and design constraints: geometric continuity should be maintained between the shape lines of the three sections, the fairness of the hull surfaces should be preserved after modification, and, during design modification, the effect of a single parameter on the other parameters should be negligible. The constraints are defined individually on the shape lines of each section and mutually between the shape lines of two connecting sections. In order to validate and visualize the design results of our shape modifiers, a real-time graphical interface is created.
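
A minimal sketch of evaluating one cubic Bézier shape line from four control points in Bernstein form; the control-point coordinates are arbitrary illustration values, not hull data:

# Evaluate a cubic Bezier shape line B(t) from four control points (Bernstein form).
# Control-point coordinates are arbitrary illustration values, not hull data.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# e.g. a keel-to-deck section line in the (y, z) plane
P0, P1, P2, P3 = map(np.array, ([0.0, -1.8], [1.2, -1.6], [2.0, -0.4], [2.1, 1.0]))
curve = cubic_bezier(P0, P1, P2, P3, np.linspace(0, 1, 50))
print("endpoints:", curve[0], curve[-1])               # the curve interpolates P0 and P3
print("tangent direction at t=0 is along P1 - P0:", P1 - P0)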

Keywords: design parameters, design constraints, shape modifiers, yacht hull

Procedia PDF Downloads 299
5889 Modelling of Damage as Hinges in Segmented Tunnels

Authors: Gelacio JuáRez-Luna, Daniel Enrique GonzáLez-RamíRez, Enrique Tenorio-Montero

Abstract:

Frame elements coupled with spring elements are used for modelling the development of hinges in segmented tunnels; the spring elements model the rotational, transverse and axial failure modes. These spring elements are equipped with constitutive models to include independently the moment, shear force and axial force, respectively. These constitutive models are formulated based on damage mechanics and experimental tests reported in the literature. The mesh of the segmented tunnels was discretized in the software GiD, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curve of the primary and secondary lining of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy through the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occurred in the lining. The second example is a tunnel with primary and secondary lining, discretized with a double-ring frame model. The outer ring simulates the segmental concrete lining and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also modelled the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load vs. displacement curves are congruent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary lining with frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.

Keywords: damage, hinges, lining, tunnel

Procedia PDF Downloads 387
5888 The Creation of a Yeast Model for 5-oxoproline Accumulation

Authors: Pratiksha Dubey, Praveen Singh, Shantanu Sen Gupta, Anand K. Bachhawat

Abstract:

5-oxoproline (pyroglutamic acid) is a cyclic lactam of glutamic acid. In the cell, it can be produced by several different pathways and is metabolized into glutamate with the help of the 5-oxoprolinase enzyme (OPLAH or OXP1). The inhibition of the 5-oxoprolinase enzyme in mammals was found to result in heart failure and is thought to be a consequence of oxidative stress [1]. To analyze the consequences of 5-oxoproline accumulation more clearly, we are generating models for 5-oxoproline accumulation in yeast. The 5-oxoproline accumulation model in yeast is being developed by two different strategies. The first is the overexpression of the mouse γ-glutamylcyclotransferase enzyme, which degrades the γ-glu-met dipeptide, taken up by the cell from the medium, into 5-oxoproline and methionine. The second strategy is to provide a high concentration of 5-oxoproline externally to the yeast cells. The intracellular 5-oxoproline levels in both models are being evaluated. In addition, the metabolic and cellular consequences are being investigated.

Keywords: 5-oxoproline, pyroglutamic acid, yeast, genetics

Procedia PDF Downloads 82
5887 Detecting Earnings Management via Statistical and Neural Networks Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to respond to the following question: Is there a significant difference between regression models and neural network models in predicting earnings management, and which one leads to a superior prediction? In approaching this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the summary of statistical results showed that the precision of the GRNN did not exhibit a significant difference in comparison with the MLP. In addition, the mean square errors of the MLP and GRNN showed a significant difference from that of the multivariable LR model. These findings support the notion of nonlinear behavior of earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management based on neural network techniques rather than linear regression models.

Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange

Procedia PDF Downloads 420
5886 Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language

Authors: Daleesha M. Viswanathan, Sumam Mary Idicula

Abstract:

Dynamic hand gestures are an intrinsic component of sign language communication. Extracting spatio-temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. A Kalman filter and Hidden Markov Models (HMMs) are incorporated into the recognition system for hand trajectory tracking and for spatio-temporal classification, respectively.
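
A minimal sketch of constant-velocity Kalman tracking of a 2D hand-centroid trajectory; the noise levels and the synthetic measurements are illustrative and are not drawn from an Indian Sign Language dataset:

# Constant-velocity Kalman filter smoothing a noisy 2D hand-centroid trajectory.
import numpy as np

dt = 1 / 30                                   # assumed 30 fps video
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])    # only position is measured
Q = 1e-3 * np.eye(4)                          # process noise
R = 4.0 * np.eye(2)                           # measurement noise (pixels^2)

x = np.zeros(4)                               # state: [px, py, vx, vy]
P = np.eye(4) * 100.0

rng = np.random.default_rng(5)
true_path = np.column_stack([np.linspace(0, 90, 60), 30 * np.sin(np.linspace(0, np.pi, 60))])
measurements = true_path + rng.normal(0, 2.0, true_path.shape)

smoothed = []
for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # update with the measurement
    P = (np.eye(4) - K @ H) @ P
    smoothed.append(x[:2].copy())
print("last raw vs filtered position:", measurements[-1], np.round(smoothed[-1], 1))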

Keywords: orientation features, discrete feature vector, HMM, Indian sign language

Procedia PDF Downloads 366
5885 Multi-Period Portfolio Optimization Using Predictive Machine Learning Models

Authors: Peng Liu, Chyng Wen Tee, Xiaofei Xu

Abstract:

This paper integrates machine learning forecasting techniques into the multi-period portfolio optimization framework, enabling dynamic asset allocation based on multiple future periods. We explore both theoretical foundations and practical applications, employing diverse machine learning models for return forecasting. This comprehensive guide demonstrates the superiority of multi-period optimization over single-period approaches, particularly in risk mitigation through strategic rebalancing and enhanced market trend forecasting. Our goal is to promote wider adoption of multi-period optimization, providing insights that can significantly enhance the decision-making capabilities of practitioners and researchers alike.

Keywords: multi-period portfolio optimization, look-ahead constrained optimization, machine learning, sequential decision making

Procedia PDF Downloads 47
5884 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
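
A minimal sketch of a TensorFlow/Keras CNN for the defective vs. non-defective classification described above; the directory layout, image size and hyperparameters are assumptions, not the authors' configuration:

# Small Keras CNN classifying cable images as defective vs. non-defective.
# Directory layout, image size and hyperparameters are assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images/train", image_size=(128, 128), batch_size=32)   # hypothetical path
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images/val", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # defective / non-defective
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)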

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net

Procedia PDF Downloads 111
5883 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on housing prices, we propose to build different prediction models based on deep learning, using existing real estate data, in order to more accurately predict housing prices or their future trend. Considering that the factors which affect housing prices vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among these three models. The other category is time series models: based on deep learning, we proposed an LSTM-1 model that considers the time series alone, and then implemented and compared the LSTM model and the Auto-Regressive Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best model program was produced, which is of great significance for the evaluation and prediction of housing prices in the real estate industry.
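
A minimal sketch of the pure time-series variant with an LSTM; the window length, layer sizes and the synthetic price series are assumptions, not the Beijing data or the authors' LSTM-1 architecture:

# Univariate LSTM forecasting the next value of a synthetic monthly price series.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
prices = (30000 + 150 * np.arange(120) + 500 * np.sin(np.arange(120) / 6)
          + rng.normal(0, 200, 120))                   # synthetic price series
scaled = (prices - prices.mean()) / prices.std()

window = 12
X = np.stack([scaled[i:i + window] for i in range(len(scaled) - window)])[..., None]
y = scaled[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

next_scaled = model.predict(scaled[-window:][None, :, None], verbose=0)[0, 0]
print("next-month forecast:", next_scaled * prices.std() + prices.mean())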

Keywords: deep learning, convolutional neural network, LSTM, housing prediction

Procedia PDF Downloads 304
5882 Predictive Modeling of Student Behavior in Virtual Reality: A Machine Learning Approach

Authors: Gayathri Sadanala, Shibam Pokhrel, Owen Murphy

Abstract:

In the ever-evolving landscape of education, Virtual Reality (VR) environments offer a promising avenue for enhancing student engagement and learning experiences. However, understanding and predicting student behavior within these immersive settings remain challenging tasks. This paper presents a comprehensive study on the predictive modeling of student behavior in VR using machine learning techniques. We introduce a rich data set capturing student interactions, movements, and progress within a VR orientation program. The dataset is divided into training and testing sets, allowing us to develop and evaluate predictive models for various aspects of student behavior, including engagement levels, task completion, and performance. Our machine learning approach leverages a combination of feature engineering and model selection to reveal hidden patterns in the data. We employ regression and classification models to predict student outcomes, and the results showcase promising accuracy in forecasting behavior within VR environments. Furthermore, we demonstrate the practical implications of our predictive models for personalized VR-based learning experiences and early intervention strategies. By uncovering the intricate relationship between student behavior and VR interactions, we provide valuable insights for educators, designers, and developers seeking to optimize virtual learning environments.

Keywords: interaction, machine learning, predictive modeling, virtual reality

Procedia PDF Downloads 140
5881 The Incubation of University Spin-Offs: An Exploratory Study of a Deep Tech Venture

Authors: Jerome D. Donovan

Abstract:

The pandemic has resulted in a dramatic re-consideration of the reliance on international student fees to support university models in Australia. A key resulting initiative for the Australian Federal Government has been shifting the way universities consider their research model, emphasising the importance of commercialising research. This study specifically examines this shift from the perspective of a university spin-off, examining how university support structures and incubation models have assisted in the translation of fundamental research into a high-growth university spin-off. A focused case study approach is adopted in this study, using an auto-ethnographic research method to document the experiences and insights drawn from being a co-founder in a university spin-off in a time where research commercialisation has emerged as a central focus in Australian universities.

Keywords: research commercialisation, spin-offs, university incubation, entrepreneurship

Procedia PDF Downloads 80
5880 Behavior Consistency Analysis for Workflow Nets Based on Branching Processes

Authors: Wang Mimi, Jiang Changjun, Liu Guanjun, Fang Xianwen

Abstract:

Loop structures often appear in business process modeling, and analyzing the behavior consistency of the corresponding workflow net models that contain loop structures is a problem: existing behavior consistency methods cannot effectively analyze process models with loop structures. In this paper, by analyzing five kinds of behavior relations between transitions, a three-dimensional figure and a two-dimensional behavior relation matrix are proposed. Based on these, an analysis method for the behavior consistency of business processes based on Petri net branching processes is proposed. Finally, an example is given, which shows that the method is effective.

Keywords: workflow net, behavior consistency measures, loop, branching process

Procedia PDF Downloads 386
5879 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks

Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano

Abstract:

The estimation of leak flow rates through narrow cracks in structures is of importance for nuclear reactor safety, since the leak flow could be detected before the occurrence of a loss-of-coolant accident. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, and the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature, but are affected largely by the roughness of the crack surface.

Keywords: crack, critical flow, leak, roughness

Procedia PDF Downloads 178
5878 Knowledge Co-Production on Future Climate-Change-Induced Mass-Movement Risks in Alpine Regions

Authors: Elisabeth Maidl

Abstract:

The interdependence of climate change and natural hazards goes along with large uncertainties regarding future risks. Regional stakeholders, experts in natural hazard management and scientists each have specific knowledge, i.e. mental models, of such risks. This diversity of views makes it difficult to find common and broadly accepted prevention measures. If the specific knowledge of these types of actors is shared in an interactive knowledge production process, this enables a broader and common understanding of complex risks and allows agreement on long-term solution strategies. Previous studies on mental models confirm that actors with specific vulnerabilities perceive different aspects of a topic and accordingly prefer different measures. In bringing these perspectives together, there is the potential to reduce uncertainty and to close blind spots in solution finding. However, studies that examine the mental models of regional actors on concrete future mass-movement risks have so far been lacking. The project tests and evaluates the feasibility of knowledge co-creation for the anticipatory prevention of climate-change-induced mass-movement risks in the Alps. As a key element, the mental models of the three included groups of actors are compared. Integrated into the research program Climate Change Impacts on Alpine Mass Movements (CCAMM2), this project is carried out in two Swiss mountain regions. The project is structured in four phases: 1) the preparatory phase, in which the participants are identified; 2) the baseline phase, in which qualitative interviews and a quantitative pre-survey are conducted with the actors; 3) the knowledge co-creation phase, in which the actors take part in a moderated exchange meeting and a participatory modelling workshop on specific risks in the region; and 4) a final public information event. Results show that participants' mental models are shaped by place of origin, profession, beliefs and values, which results in different narratives on climate change and natural hazard risks. Further, the more intensively participants interact with each other, the more likely they are to change their views. This provides empirical evidence on how changes in opinions and mindsets can be induced and fostered.

Keywords: climate change, knowledge-co-creation, participatory process, natural hazard risks

Procedia PDF Downloads 67