Search results for: computational model(s)
7249 A New Mathematical Model of Human Olfaction
Authors: H. Namazi, H. T. N. Kuan
Abstract:
It is known that in humans, adaptation to a given odor occurs within a short span of time (typically about one minute) after the odor is presented. Different models of human olfaction have been developed, but none of them considers the diffusion phenomenon in olfaction. A novel microscopic model of human olfaction is presented in this paper. We develop this model by incorporating transient diffusivity; the mathematical model is based on the diffusion of the odorant within the mucus layer. The model developed in this paper makes it possible to quantify the objective strength of an odor.
Keywords: diffusion, microscopic model, mucus layer, olfaction
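To make the diffusion step concrete, here is a minimal one-dimensional sketch of transient odorant diffusion through a mucus layer, solved with an explicit finite-difference scheme; the diffusivity, layer thickness, and boundary conditions are illustrative assumptions, not values from the paper.

```python
import numpy as np

# 1D transient diffusion of an odorant through a mucus layer,
# solved with an explicit finite-difference (FTCS) scheme.
# All parameter values are illustrative placeholders, not the paper's.
D = 1e-9                    # odorant diffusivity in mucus, m^2/s (assumed)
L = 30e-6                   # mucus layer thickness, m (assumed)
nx = 61
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D        # time step satisfying the FTCS stability limit

c = np.zeros(nx)            # odorant concentration profile
c[0] = 1.0                  # fixed concentration at the air/mucus interface

for _ in range(20000):      # ~2 s of simulated time
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = 1.0              # Dirichlet condition at the air/mucus interface
    c[-1] = c[-2]           # zero-flux condition at the receptor side

print(f"concentration at receptor depth: {c[-1]:.3f}")
```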
Procedia PDF Downloads 504

7248 Analytical and Numerical Results for Free Vibration of Laminated Composite Plates
Authors: Mohamed Amine Ben Henni, Taher Hassaine Daouadji, Boussad Abbes, Yu Ming Li, Fazilay Abbes
Abstract:
The reinforcement and repair of concrete structures by bonding composite materials have become relatively common operations. Different types of composite materials can be used: carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP), as well as functionally graded materials (FGM). The development of analytical and numerical models describing the mechanical behavior of civil engineering structures reinforced with composite materials is therefore necessary. These models will enable engineers to select, design, and size adequate reinforcements for the various types of damaged structures. This study focuses on the free vibration behavior of orthotropic laminated composite plates using a refined shear deformation theory. In these models, the distribution of transverse shear stresses is considered parabolic, satisfying the zero-shear-stress condition on the top and bottom surfaces of the plates without the use of shear correction factors. In this analysis, the equation of motion for simply supported thick laminated rectangular plates is obtained using Hamilton's principle. The accuracy of the developed model is demonstrated by comparing our results with solutions derived from other higher-order models and with data found in the literature. In addition, a finite-element analysis is used to calculate the natural frequencies of laminated composite plates, and the results are compared with those obtained by the analytical approach.
Keywords: composite materials, laminated composite plate, finite-element analysis, free vibration
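For orientation, the following sketch computes the classical thin-plate (Kirchhoff) natural frequencies of a simply supported isotropic rectangular plate, the usual baseline against which refined shear deformation theories are compared; the material and geometry values are illustrative, not the laminate data of the study.

```python
import numpy as np

# Navier solution for a simply supported isotropic thin plate:
# omega_mn = pi^2 * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho * h))
E, nu, rho = 70e9, 0.3, 2700.0          # aluminum-like material (assumed)
a, b, h = 0.4, 0.3, 0.005               # plate dimensions, m (assumed)
D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity

for m, n in [(1, 1), (1, 2), (2, 1)]:
    omega = np.pi**2 * ((m / a)**2 + (n / b)**2) * np.sqrt(D / (rho * h))
    print(f"mode ({m},{n}): f = {omega / (2 * np.pi):.1f} Hz")
```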
Procedia PDF Downloads 290

7247 The Analysis of Deceptive and Truthful Speech: A Computational Linguistic Based Method
Authors: Seham El Kareh, Miramar Etman
Abstract:
Recently, detecting liars and extracting the features that distinguish them from truth-tellers have been the focus of a wide range of disciplines. To the authors' best knowledge, most of the work has been done on facial expressions and body gestures, while only a few studies have examined the language used by liars and truth-tellers. This paper sheds light on four axes. The first axis concerns building an audio corpus of deceptive and truthful speech by Egyptian Arabic speakers. The second axis focuses on examining human perception of lies and demonstrating the need for computational linguistic methods to extract the features that characterize truthful and deceptive speech. The third axis is concerned with building a linguistic analysis program that extracts from the corpus the inter- and intra-linguistic cues of deceptive and truthful speech. The program built here is based on selected categories from the Linguistic Inquiry and Word Count (LIWC) program. Our results demonstrate that when lying, Egyptian Arabic speakers preferred first-person pronouns and the present tense over the past tense, and their lies lacked second-person pronouns; when telling the truth, they preferred verbs related to motion and nouns related to time. The results also show that more data are needed to establish the significance of words related to emotions and numbers.
Keywords: Egyptian Arabic corpus, computational analysis, deceptive features, forensic linguistics, human perception, truthful features
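The following toy sketch illustrates the LIWC-style category counting that underlies such an analysis; the category word lists are invented stand-ins, not the study's Arabic lexicon.

```python
from collections import Counter

# Illustrative stand-in word lists; the real study uses LIWC categories
# adapted to Egyptian Arabic.
CATEGORIES = {
    "first_person": {"i", "me", "my", "we", "our"},
    "second_person": {"you", "your"},
    "motion_verbs": {"go", "went", "move", "run", "walk"},
}

def category_rates(text: str) -> dict:
    """Return the per-token rate of each linguistic category."""
    tokens = text.lower().split()
    counts = Counter()
    for tok in tokens:
        for cat, words in CATEGORIES.items():
            if tok in words:
                counts[cat] += 1
    total = max(len(tokens), 1)
    return {cat: counts[cat] / total for cat in CATEGORIES}

print(category_rates("I went home and you saw me go"))
```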
Procedia PDF Downloads 204

7246 Image Captioning with Vision-Language Models
Authors: Promise Ekpo Osaine, Daniel Melesse
Abstract:
Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model must understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard deep learning approach to image captioning using an injection-style encoder-decoder architecture, where the encoder extracts image features and the decoder generates a sequence of words that describes the image content. We investigated five image encoders: ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As the caption generation structure, we used long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the other encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586, while using a single-layer LSTM.
Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score
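A minimal sketch of an injection-style CNN-LSTM captioner of the kind described, written in PyTorch; the use of torchvision's ResNet-101 backbone and all dimensions are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        backbone = models.resnet101(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop FC head
        self.img_proj = nn.Linear(2048, embed_dim)   # ResNet-101 feature size
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)          # (B, 2048)
        img_token = self.img_proj(feats).unsqueeze(1)    # (B, 1, E)
        tokens = self.embed(captions)                    # (B, T, E)
        seq = torch.cat([img_token, tokens], dim=1)      # inject image as first token
        hidden, _ = self.lstm(seq)
        return self.out(hidden)                          # (B, T+1, vocab)

model = CaptionModel(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 5000])
```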
Procedia PDF Downloads 75

7245 Two-Dimensional Modeling of Spent Nuclear Fuel Using FLUENT
Authors: Imane Khalil, Quinn Pratt
Abstract:
In a nuclear reactor, an array of fuel rods containing stacked uranium dioxide pellets clad with Zircaloy is the heat source for a thermodynamic cycle that converts heat to electricity. After the fuel is used in a nuclear reactor, the assemblies are stored underwater in a spent nuclear fuel pool at the nuclear power plant while heat generation and radioactive decay rates decrease, before they are placed in packages for dry storage or transportation. A computational model of a Boiling Water Reactor spent fuel assembly is built in FLUENT, the computational fluid dynamics package. Heat transfer simulations were performed on the two-dimensional 9x9 spent fuel assembly to predict the maximum cladding temperature for different inputs to the FLUENT model. Uncertainty quantification is used to predict the heat transfer and the maximum temperature profile inside the assembly.
Keywords: spent nuclear fuel, conduction, heat transfer, uncertainty quantification
Procedia PDF Downloads 218

7244 Empirical Analyses of Students’ Self-Concepts and Their Mathematics Achievements
Authors: Adetunji Abiola Olaoye
Abstract:
The study examined students’ self-concepts and mathematics achievement vis-à-vis three existing theoretical models: the Humanist self-concept (M1), the Contemporary self-concept (M2), and the Skills Development self-concept (M3). The study comprised one research question, which was transformed into hypotheses vis-à-vis the existing theoretical models. The sample comprised twelve public secondary schools, from which twenty-five mathematics teachers, twelve counselling officers, and one thousand Upper Basic II students were selected based on intact classes, as the school administrations and system did not allow randomization. Two instruments, a 10-item Achievement Test in Mathematics (r1 = 0.81) and a 10-item student self-concept questionnaire (r2 = 0.75), were adapted, validated, and used for the study. Data were analysed through descriptive statistics, one-way ANOVA, t-tests, and correlation statistics at the 5% level of significance. Findings revealed means and standard deviations of pre-achievement test scores of (51.322, 16.10), (54.461, 17.85), and (56.451, 18.22) for the Humanist, Contemporary, and Skills Development self-concepts, respectively. The study also showed a significant difference in the academic performance of students across the existing models (F-cal > F-value, df = (2,997); P < 0.05). Furthermore, the study revealed students’ achievement in mathematics and self-concept questionnaire scores with means and standard deviations of (57.4, 11.35) and (81.6, 16.49), respectively. The results confirmed an affirmative relationship with the Contemporary self-concept model (M2), which posits an individual's subject-specific self-concept as the primary determinant of higher academic achievement in the subject, as there is a statistical correlation between students’ self-concept and mathematics achievement (-Z_cal < -Z_val, df = 998; P < 0.05*). The implications of the study are discussed, and recommendations and suggestions for further studies are proffered.
Keywords: contemporary, humanists, self-concepts, skill development
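A minimal sketch of the one-way ANOVA step, using the reported group means and standard deviations to generate placeholder scores; the data are fabricated for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic scores drawn from the reported group means/SDs (group sizes assumed)
humanist = rng.normal(51.322, 16.10, 300)       # M1 group scores
contemporary = rng.normal(54.461, 17.85, 300)   # M2 group scores
skill_dev = rng.normal(56.451, 18.22, 300)      # M3 group scores

f_stat, p_value = stats.f_oneway(humanist, contemporary, skill_dev)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # significant at 5% if p < 0.05
```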
Procedia PDF Downloads 236

7243 Optimized Text Summarization Model on Mobile Screens for Sight-Interpreters: An Empirical Study
Authors: Jianhua Wang
Abstract:
To obtain key information quickly from long texts on the small screens of mobile devices, sight-interpreters need an optimized summarization model for fast information retrieval. Four summarization models based on previous studies were examined: title + key words (TKW), title + topic sentences (TTS), key words + topic sentences (KWTS), and title + key words + topic sentences (TKWTS). Psychological experiments were conducted on the four models for three different genres of interpreting texts to establish the optimized summarization model for sight-interpreters. This empirical study shows that the optimized summarization model for sight-interpreters to quickly grasp the key information of the texts they interpret is title + key words (TKW) for cultural texts, title + key words + topic sentences (TKWTS) for economic texts, and topic sentences + key words (TSKW) for political texts.
Keywords: different genres, mobile screens, optimized summarization models, sight-interpreters
Procedia PDF Downloads 313

7242 Thermal Effect on Wave Interaction in Composite Structures
Authors: R. K. Apalowo, D. Chronopoulos, V. Thierry
Abstract:
A wide range of failure modes exists in composite structures due to their increased usage, especially in the aerospace industry. Moreover, the temperature-dependent wave response of composite and layered structures has been studied continuously, though to a limited extent, over the last decade, mainly because of the broad operating temperature range of aerospace structures. A wave finite element (WFE) and finite element (FE) based computational method is presented by which the temperature-dependent wave dispersion characteristics and interaction phenomena in composite structures can be predicted. Initially, the temperature-dependent mechanical properties of the panel in the range of −100 °C to 150 °C are measured experimentally using Thermal Mechanical Analysis (TMA). The temperature-dependent wave dispersion characteristics of each waveguide of the structural system, which is discretized as a number of waveguides coupled by a coupling element, are calculated using the WFE approach. The wave scattering properties, as a function of temperature, are determined by coupling the WFE wave characteristics models of the waveguides with a full FE model of the coupling element in which the defect is included. Numerical case studies are presented for two waveguides coupled through a coupling element.
Keywords: finite element, temperature dependency, wave dispersion characteristics, wave finite element, wave scattering properties
Procedia PDF Downloads 307

7241 Model Observability – A Monitoring Solution for Machine Learning Models
Authors: Amreth Chandrasehar
Abstract:
Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations become more efficient and drive the business. However, this comes at a massive development cost, and failed projects mean lost business opportunities. According to a Gartner report, 85% of data science projects fail, and one factor contributing to this is insufficient attention to Model Observability. Model Observability helps developers and operators pinpoint model performance issues such as data drift and helps identify the root cause of issues. This paper focuses on providing insights into incorporating model observability into model development and operationalizing it in production.
Keywords: model observability, monitoring, drift detection, ML observability platform
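One common building block of such drift detection is a two-sample distribution test; the sketch below compares a feature's training distribution against production data with a Kolmogorov-Smirnov test. The data and alerting threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 5000)   # distribution seen at training time
live_feature = rng.normal(0.4, 1.0, 5000)    # shifted distribution in production

stat, p_value = stats.ks_2samp(train_feature, live_feature)
if p_value < 0.01:                           # assumed alerting threshold
    print(f"drift detected: KS = {stat:.3f}, p = {p_value:.2e}")
```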
Procedia PDF Downloads 110

7240 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are produced, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation of different tank designs with respect to various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling and is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis. Machine learning could subsequently be used, for example, to obtain the optimum directly from this data pool without running further simulations.
Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
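The analytical dome-winding step mentioned above can be illustrated with Clairaut's relation for geodesic paths, r·sin(α) = r₀, together with a common fiber-volume-continuity approximation for the thickness build-up; the geometry values below are illustrative assumptions, not the study's.

```python
import numpy as np

r0 = 0.025       # polar opening radius, m (assumed)
R = 0.20         # cylinder radius, m (assumed)
t_cyl = 0.004    # layer thickness on the cylinder, m (assumed)

r = np.linspace(R, r0 * 1.01, 50)    # parallel radius moving down the dome
alpha = np.arcsin(r0 / r)            # Clairaut: winding angle at radius r
alpha_cyl = np.arcsin(r0 / R)        # winding angle on the cylindrical section

# Approximate thickness build-up from fiber-volume continuity
t = t_cyl * (R * np.cos(alpha_cyl)) / (r * np.cos(alpha))

print(f"angle at cylinder: {np.degrees(alpha_cyl):.1f} deg, "
      f"near polar opening: {np.degrees(alpha[-1]):.1f} deg")
```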
Procedia PDF Downloads 134

7239 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models
Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif
Abstract:
This paper discusses a novel approach to approximating the quadrature integrals that arise in the estimation of likelihood parameters for generalized linear mixed models (GLMMs). Bayesian methodology likewise requires the computation of multidimensional integrals with respect to posterior distributions, computations that are not only tedious and cumbersome but in some situations impossible because of singularities, irregular domains, etc. An attempt has been made in this work to apply Sinc-function-based quadrature rules to approximate such intractable integrals, as Sinc-based methods have several advantages: the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, this is the first use of a Sinc-function-based approach in the statistical domain, and its viability and future scope for estimating the parameters of GLMMs, as well as in some other statistical areas, are discussed.
Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function
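A minimal sketch of a Sinc (trapezoidal) quadrature rule on the real line, where the truncated sum h·Σ f(kh) converges exponentially for analytic, rapidly decaying integrands; the step size and truncation level are illustrative choices.

```python
import numpy as np

def sinc_quadrature(f, h=0.25, N=40):
    """Approximate the integral of f over the real line by h * sum f(k*h)."""
    k = np.arange(-N, N + 1)
    return h * np.sum(f(k * h))

# Test against a known value: integral of exp(-x^2) over R equals sqrt(pi)
approx = sinc_quadrature(lambda x: np.exp(-x**2))
print(f"approx = {approx:.12f}, exact sqrt(pi) = {np.sqrt(np.pi):.12f}")
```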
Procedia PDF Downloads 393

7238 Landscape Genetic and Species Distribution Modeling of Date Palm (Phoenix dactylifera L.)
Authors: Masoud Sheidaei, Fahimeh Koohdar
Abstract:
Date palms are economically important trees with high nutritional and medicinal value. More than 400 date palm cultivars are cultivated in many regions of Iran, but no report is available on the landscape genetics and species distribution modeling of these trees in the country. Therefore, the present study provides a detailed insight into the genetic diversity and structure of date palm populations in Iran and investigates the effects of geographical and climatic variables on the structuring of genetic diversity in them. We used different computational methods in the study, such as spatial principal components analysis (sPCA), redundancy analysis (RDA), the latent factor mixed model (LFMM), and the Maxent and Dismo approaches to species distribution modeling, together with a combination of different molecular markers. The results showed that both global and local spatial features play an important role in the genetic structuring of date palms, and genetic regions associated with local adaptation and climatic variables were identified. The effects of climate change on the distribution of these taxa and the genetic regions adaptive to these changes are discussed.
Keywords: adaptive genetic regions, genetic diversity, isolation by distance, populations divergence
Procedia PDF Downloads 106

7237 Effects of Front Porch and Loft on Indoor Ventilation in the Renewal of Beijing Courtyard
Authors: Zhongzhong Zeng, Zichen Liang
Abstract:
In recent years, Beijing courtyards have been facing renewal and renovation, and residents contend with small house areas, large household sizes, and old and dangerous houses. Among the many renovation methods, the authors note two common practices: using the front porch to expand the floor area and adding a loft. Residents and architects, however, have given little consideration to the ventilation performance of the interior before beginning such remodeling, even though ventilation is crucial to the indoor environmental quality of a home. The aim of this article is to explore the positive or negative impacts of both front porch and loft structures on indoor ventilation in the courtyard. The major method in this study is comparative analysis: the authors create four alternative house models, with the presence of a front porch and of a loft as the two variables, and examine indoor ventilation using the CFD (Computational Fluid Dynamics) technique. The results obtained from the sectional airflow and the velocity contours on a plane at 1.5 m height are as follows: the loft, to a certain extent, disrupts the airflow organization of the building and makes the high windows in the rear wall less effective, while enclosing the front porch as part of the house has no significant effect on ventilation. Renovations should, however, avoid enclosing the front porch and adding a loft at the same time. The findings of this study lead to the following recommendations: strive to preserve the courtyard building's original architectural design and adjust only the inappropriate elements or constructions. Ventilation in the loft portion is inadequate, and inhabitants typically use the loft as a living area; this may lead the building to rely more on air conditioning in the summer, raising energy demand. The front porch serves as a transition place as well as a source of shade, weather protection, and indoor ventilation. In conclusion, the examination of interior environments in future studies should concentrate on cross-disciplinary, multi-angle, and multi-level research topics.
Keywords: Beijing courtyard renewal, CFD, indoor environment, ventilation analysis
Procedia PDF Downloads 80

7236 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level
Authors: Pedro M. Abreu, Bruno R. Mendes
Abstract:
The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare ever since. Co-payments lead to a rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis, covering hospital practices in particular and co-payment strategies in general, was carried out across all European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard, allowing the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) achieve greater compliance and effectiveness, the English models (total score of 50%) are more accessible, and the French models (total score of 50%) are better suited to their socio-economic and legal framework. Other European models did not show the same quality and/or performance and so were not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially as a strategy to improve them and to increase the value that the end-user assigns to these services and products, such as medicines.
Keywords: clinical pharmacy, co-payments, healthcare, medicines
Procedia PDF Downloads 250

7235 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis
Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho
Abstract:
This paper compares fuzzy-machine learning algorithms, namely the Support Vector Machine (SVM) and K-Nearest Neighbor (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data are pre-processed using an Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max normalization, and Principal Component Analysis (PCA), which are used to predict feature labels in the dataset, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. The K-fold (with K = 10) cross-validation method is used to evaluate the performance of the models using the metrics ROC (Receiver Operating Characteristic curve), specificity, and sensitivity. The models are also tested with 20% of the dataset. The validation results show that KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.
Keywords: machine learning algorithms, interval type-2 fuzzy logic, fire outbreak, support vector machine, K-nearest neighbour, principal component analysis
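A minimal sketch of the post-fuzzification pipeline described above (min-max scaling, PCA to two components, 10-fold cross-validation of KNN and SVM); the data are synthetic placeholders for the temperature, smoke, and flame features.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.random((500, 3))   # synthetic temperature, smoke, flame readings
y = (X @ np.array([0.5, 0.3, 0.2]) > 0.5).astype(int)  # toy fire/no-fire labels

for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC())]:
    pipe = make_pipeline(MinMaxScaler(), PCA(n_components=2), clf)
    scores = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean ROC-AUC = {scores.mean():.3f}")
```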
Procedia PDF Downloads 179

7234 Impact of Artificial Intelligence Technologies on Information-Seeking Behaviors and the Need for a New Information Seeking Model
Authors: Mohammed Nasser Al-Suqri
Abstract:
The former information-seeking models were proposed more than two decades ago, prior to the evolution of the digital information era and Artificial Intelligence (AI) technologies. The lack of current information-seeking models within Library and Information Studies has resulted in fewer advancements in teaching students about information-seeking behaviors and in the design of library tools and services. To better address these concerns, this study aims to propose a state-of-the-art model focused on the information-seeking behavior of library users in the Sultanate of Oman. The study aims to develop, design, and contextualize a real-time, user-centric information-seeking model capable of enhancing information needs and information usage while incorporating critical insights for digital library practices. A further aim is to establish a far-sighted, state-of-the-art frame of reference covering AI while synthesizing digital resources and information for optimizing information-seeking behavior. The proposed study is empirically designed based on a mixed-method process flow, technical surveys, in-depth interviews, focus group evaluations, and stakeholder investigations. The study's data pool consists of users and specialist LIS staff at 4 public libraries and 26 academic libraries in Oman. The designed research model is expected to support LIS by offering multi-dimensional insights with AI integration, redefining the information-seeking process, and developing a technology-rich model.
Keywords: artificial intelligence, information seeking, information behavior, information seeking models, libraries, Sultanate of Oman
Procedia PDF Downloads 115

7233 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and the topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by its log likelihood for the holdout data. The performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. The hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of product categories.
Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
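A minimal sketch of fitting a restricted Boltzmann machine to binary basket data and scoring holdout baskets, using scikit-learn's BernoulliRBM as a stand-in for the authors' implementation; the data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic 0/1 baskets matching the data set's shape (9,835 x 169)
baskets = (rng.random((9835, 169)) < 0.03).astype(np.float64)

train, holdout = train_test_split(baskets, test_size=0.5, random_state=1)
rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=1)
rbm.fit(train)

# score_samples returns a pseudo-likelihood proxy for the log likelihood
print(f"holdout pseudo-log-likelihood: {rbm.score_samples(holdout).sum():.0f}")
```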
Procedia PDF Downloads 199

7232 Static vs. Stream Mining Trajectories Similarity Measures
Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh
Abstract:
Trajectory similarity can be defined as the cost of transforming one trajectory into another under a given similarity method. It is the core of numerous mining tasks such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the area confined between entire trajectories. In this article, these approaches are evaluated in terms of computational cost, memory usage, accuracy, and the amount of data needed in advance, to determine their suitability for stream-mining applications. The evaluation results show that stream-mining applications favor similarity methods that have low computational cost and memory usage, require a single scan over the data, and are free of mathematical complexity, owing to the high-speed generation of data.
Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining
Procedia PDF Downloads 392

7231 Elastoplastic and Ductile Damage Model Calibration of Steels for Bolt-Sphere Joints Used in China’s Space Structure Construction
Authors: Huijuan Liu, Fukun Li, Hao Yuan
Abstract:
The bolted spherical node is a common type of joint in space steel structures, and the bolt-sphere joint portion almost always controls the node's bearing capacity. Investigating the bearing performance and progressive failure in service often requires high-fidelity numerical models. This paper focuses on the constitutive models of the bolt steel and sphere steel used in China’s space structure construction. The elastoplastic model is determined by a standard tensile test and a calibrated Voce saturated hardening rule. Ductile damage is found to be dominant based on fractography analysis. The Rice-Tracey ductile fracture rule is therefore selected, and its parameters are calibrated based on tensile tests of notched specimens. These calibrated material models can benefit research and engineering work in similar fields.
Keywords: bolt-sphere joint, steel, constitutive model, ductile damage, model calibration
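A minimal sketch of calibrating a Voce saturated-hardening law, σ = σ_y + Q(1 − e^(−b·ε_p)), to tensile-test data by least squares; the stress-strain points are synthetic placeholders, not the measured curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(eps_p, sigma_y, Q, b):
    """Voce saturated hardening: flow stress as a function of plastic strain."""
    return sigma_y + Q * (1.0 - np.exp(-b * eps_p))

eps_p = np.linspace(0.0, 0.12, 25)                 # plastic strain points
true_curve = voce(eps_p, 640.0, 180.0, 35.0)       # assumed "measured" response
stress = true_curve + np.random.default_rng(3).normal(0, 4, eps_p.size)

params, _ = curve_fit(voce, eps_p, stress, p0=(600.0, 150.0, 20.0))
print("sigma_y = {:.1f} MPa, Q = {:.1f} MPa, b = {:.1f}".format(*params))
```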
Procedia PDF Downloads 135

7230 Numerical Investigation of Pressure and Velocity Field Contours of Dynamics of Drop Formation
Authors: Pardeep Bishnoi, Mayank Srivastava, Mrityunjay Kumar Sinha
Abstract:
This article presents a numerical investigation of the pressure and velocity field variation during pendant drop formation through a capillary tube. Numerical simulations are executed using the volume of fluid (VOF) method in computational fluid dynamics (CFD). In this problem, a non-Newtonian fluid is considered as the dispersed fluid, whereas air is considered the continuous fluid. Pressure contours at various time steps show that pressure varies nearly hydrostatically at each stage of drop formation. The results also show the pressure variation of the liquid droplet during free fall in the computational domain. The evacuation of fluid from the necking region is shown by the velocity-field contours, and the role of surface tension in the pressure contours of drop formation is also studied.
Keywords: pressure contour, surface tension, volume of fluid, velocity field
Procedia PDF Downloads 403

7229 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications
Authors: Avinoam Rabinovich
Abstract:
CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net-zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.
Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow
Procedia PDF Downloads 66

7228 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
Authors: A. Schuchter, M. Promegger
Abstract:
The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a convenient feature for teachers – it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. Flipped classroom models, meanwhile, are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects built around drone-building, to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation), and coding (computational thinking, block-based programming, JavaScript, and Python) and makes use of collaborative, shared 3D modeling with clara.io, through which students create mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups helps students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an 'open book') with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. In this way, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of 'reverse engineering', cross-referencing, and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.
Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS model, problem-based learning
Procedia PDF Downloads 119

7227 Developing a Third Degree of Freedom for Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents; the interaction rule determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously, so it is fundamental to know how a real-world opinion (e.g., supporting a candidate) is turned into a number. Specifically, we want to know whether, under a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted into one another, in the same way as we convert meters to feet. In our work, we therefore analyze how such scale transformations affect opinion dynamics models, both through mathematical modeling and through validation via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change a model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. Using two models from the standard literature, we show that a scale transformation can transform one model into the other; this transformation is exact, and it holds for every result. Lastly, we test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics, as its properties have a strong impact both on theoretical models and on their application to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
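As a toy illustration of the scale effect, the sketch below runs a Deffuant-style bounded-confidence model on the same opinions before and after a monotone (squaring) rescaling; the resulting cluster counts may differ between the two runs. The model choice and all parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=60000, seed=0):
    """Bounded-confidence dynamics: interacting pairs within eps move together."""
    rng = np.random.default_rng(seed)
    x = opinions.copy()
    for _ in range(steps):
        i, j = rng.integers(0, x.size, 2)
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):
    """Count opinion clusters as gaps larger than tol in the sorted opinions."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

raw = np.random.default_rng(1).random(200)       # opinions on one scale
print("original scale:", n_clusters(deffuant(raw)))
print("squared scale: ", n_clusters(deffuant(raw**2)))  # same opinions, rescaled
```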
Procedia PDF Downloads 154

7226 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir
Authors: David Lall, Vikram Vishal, P. G. Ranjith
Abstract:
Numerical modeling of gas production from hydrate-bearing reservoirs requires various thermal, hydrological, chemical, and mechanical phenomena to be solved in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters, since it determines both heat transfer and mass transfer. The role of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs, since it depends on the saturation of gas hydrates and hence is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently of different morphologies of hydrate deposits, on reservoir-scale estimates of gas production by depressurization. We observe significant differences in produced water volumes and in the cumulative mass of produced gas between the models, highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media
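One commonly used family of permeability-reduction models is the power law k_r = (1 − S_h)^N, with the exponent N chosen to reflect hydrate morphology; the sketch below compares two illustrative exponents. The specific values are assumptions, not the models used in the study.

```python
import numpy as np

def k_reduction(s_h: np.ndarray, n_exp: float) -> np.ndarray:
    """Relative permeability as a function of hydrate saturation S_h."""
    return (1.0 - s_h) ** n_exp

s_h = np.linspace(0.0, 0.8, 5)
# Higher exponents represent more flow-obstructing (pore-filling) habits
for label, n_exp in [("grain-coating (assumed N=3) ", 3.0),
                     ("pore-filling  (assumed N=10)", 10.0)]:
    print(label, np.round(k_reduction(s_h, n_exp), 4))
```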
Procedia PDF Downloads 218

7225 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study
Authors: Ignatio Madanhire, Charles Mbohwa
Abstract:
This study looks at the best model of the aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production plan. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from them. Finally, some benchmarking against other furniture manufacturers was undertaken to assess the relevance and level of use of such models in other furniture firms.
Keywords: aggregate production planning, trial and error, linear programming, furniture industry
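A minimal sketch of the linear programming formulation: choose regular and overtime production per period to meet demand at minimum cost while carrying inventory between periods. All costs, capacities, and demands are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

demand = [120, 150, 180]            # units per period (assumed)
c_reg, c_ot, c_inv = 10.0, 15.0, 2.0   # unit costs: regular, overtime, inventory
cap_reg, cap_ot = 130, 40              # capacities per period

# Variables per period t: [reg_t, ot_t, inv_t] -> 9 variables in total
c = [c_reg, c_ot, c_inv] * 3
A_eq, b_eq = [], []
for t in range(3):
    row = [0.0] * 9
    row[3*t], row[3*t + 1], row[3*t + 2] = 1.0, 1.0, -1.0  # production - ending inv
    if t > 0:
        row[3*(t - 1) + 2] = 1.0                           # + starting inventory
    A_eq.append(row)                                       # balance = demand_t
    b_eq.append(demand[t])

bounds = [(0, cap_reg), (0, cap_ot), (0, None)] * 3
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"optimal cost: {res.fun:.0f}", np.round(res.x, 1))
```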
Procedia PDF Downloads 555

7224 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. However, the conventional method is the better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
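A minimal sketch of the Random Forest variant: predicting log-PGA from magnitude, distance, and a site parameter on synthetic data that follows a generic attenuation-like trend; the feature set and coefficients are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 4528
mag = rng.uniform(3.0, 5.8, n)        # magnitude range from the database
dist = rng.uniform(4.0, 500.0, n)     # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)   # site-stiffness proxy (assumed feature)
# Generic attenuation-like trend with noise, for illustration only
log_pga = 1.1 * mag - 1.6 * np.log10(dist) - 0.001 * vs30 + rng.normal(0, 0.3, n)

X = np.column_stack([mag, np.log10(dist), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, random_state=5)

model = RandomForestRegressor(n_estimators=300, random_state=5)
model.fit(X_tr, y_tr)
print(f"holdout R^2 = {model.score(X_te, y_te):.3f}")
```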
Procedia PDF Downloads 121

7223 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It is frequently observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test itself performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
Procedia PDF Downloads 142

7222 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method of predicting the seismic demand of structures. On the other hand, its long computational time is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimating the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements calculated for a basic structure. Steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated through the application of three types of earthquakes (distinguished by the time of peak ground acceleration).
Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
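For reference, the modal-combination step that methods like BMD are benchmarked against can be sketched in a few lines; the peak modal responses below are illustrative values.

```python
import numpy as np

peak_modal = np.array([42.0, 11.0, 4.0])   # peak response per mode (assumed units)

srss = np.sqrt(np.sum(peak_modal**2))      # Square Root of the Sum of Squares
abs_sum = np.sum(np.abs(peak_modal))       # conservative absolute-sum bound

print(f"SRSS estimate: {srss:.1f}, absolute-sum bound: {abs_sum:.1f}")
```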
Procedia PDF Downloads 353

7221 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce
Authors: Heiko Diefenbach, Christoph H. Glock
Abstract:
Order picking, i.e., the order-based retrieval of items in a warehouse, is an important, time- and cost-intensive process in many logistic systems. Despite the ongoing trend toward automation, most order picking systems are still manual picker-to-parts systems, in which human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or an incorrect number of items. Errors can cause additional costs and significant correction efforts, and age might increase a person's likelihood of making mistakes. Hence, the negative impact of picking errors might grow with the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization, usually with the objective of assigning items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research on picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of picking errors. For example, storing similar-looking items apart from one another might reduce confusion, and storing items that are hard to count, or that require a lot of counting, at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches, with their respective benefits and shortcomings, are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, due to age, for example. The results suggest that considering error-prevention measures in storage assignment can reduce error probabilities with only minor decreases in picking efficiency. These results might be especially relevant for an aging workforce.
Keywords: an aging workforce, error prevention, order picking, storage assignment
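A minimal sketch of folding error prevention into a storage assignment model: items are assigned to slots by minimizing travel cost plus a weighted error-risk penalty, solved here with the Hungarian algorithm; all costs and the weighting factor are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n_items = n_slots = 5
travel = rng.uniform(10, 60, (n_items, n_slots))     # pick-travel cost per pairing
error_risk = rng.uniform(0, 1, (n_items, n_slots))   # error probability per pairing

lam = 30.0                          # weight placed on error prevention (assumed)
cost = travel + lam * error_risk    # combined objective
rows, cols = linear_sum_assignment(cost)

print("item -> slot:", dict(zip(rows.tolist(), cols.tolist())))
print(f"total combined cost: {cost[rows, cols].sum():.1f}")
```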
Procedia PDF Downloads 203

7220 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, however, we determine healthiness for only one-sixth of the body parts, specifically the chest: we detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they take CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach is to split the problem into two parts: first classify the image, then determine whether it is healthy. In order to classify an image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, such as neural networks or logistic regression models, and fit them on the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more advanced algorithms to the models, such as multiplicative weight update. For the second part of the problem, determining whether the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part, once again split into test and training sets. We then train another neural network on the training set images and use the testing set to measure its accuracy; we do this only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
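A minimal PyTorch sketch of a CNN classifier of the kind described for the body-part classification step; the architecture, input size, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BodyPartCNN(nn.Module):
    def __init__(self, n_classes: int = 6):   # six body-part classes (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (B, 1, 64, 64) grayscale scans
        return self.classifier(self.features(x).flatten(1))

model = BodyPartCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)                            # torch.Size([4, 6])
```

A second binary model of the same shape (n_classes = 2) could then flag pneumonia in the chest images, matching the two-stage approach described in the abstract.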
Procedia PDF Downloads 103