Search results for: Mathematical modeling
1028 VR in the Middle School Classroom: An Experimental Study on Spatial Relations and Immersive Virtual Reality
Authors: Danielle Schneider, Ying Xie
Abstract:
Middle school science, technology, engineering, and math (STEM) teachers face an exceptional challenge: they are expected to incorporate curricula that build strong spatial reasoning skills on rudimentary geometry concepts. Because spatial ability is so closely tied to STEM students’ success, researchers are tasked with determining effective instructional practices that create an authentic learning environment within the immersive virtual reality learning environment (IVRLE). This study investigated the effect of the IVRLE on middle school STEM students’ spatial reasoning skills. The experiment comprised thirty 7th-grade STEM students divided into a treatment group, which built an object in an immersive VR platform by applying spatial processing and visualizing its dimensions, and a control group, which built the identical object using a desktop computer-based, computer-aided design (CAD) program. Before and after the students participated in their respective “3D modeling” environment, their spatial reasoning abilities were assessed using the Middle Grades Mathematics Project Spatial Visualization Test (MGMP-SVT). Additionally, both groups created a physical 3D model as a secondary measure of the effectiveness of the IVRLE. A one-way ANOVA identified a negative effect for those in the IVRLE. These findings suggest that, for middle school students, virtual reality (VR) is an inadequate tool for improving spatial relation skills compared to desktop-based CAD.
Keywords: virtual reality, spatial reasoning, CAD, middle school STEM
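As a reading aid only (not material from the study above), the following minimal sketch shows how a one-way ANOVA comparing pre/post gain scores between a treatment and a control group can be computed in Python; the group sizes match the abstract, but all scores are simulated placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical MGMP-SVT gain scores (post minus pre) for 15 students per group
vr_gains = rng.normal(loc=-0.5, scale=2.0, size=15)    # treatment (immersive VR) group
cad_gains = rng.normal(loc=1.5, scale=2.0, size=15)    # control (desktop CAD) group

f_stat, p_value = stats.f_oneway(vr_gains, cad_gains)  # one-way ANOVA across the two groups
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```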
Procedia PDF Downloads 85
1027 Instructional Game in Teaching Algebra for High School Students: Basis for Instructional Intervention
Authors: Jhemson C. Elis, Alvin S. Magadia
Abstract:
Our world is full of numbers, shapes, and figures that illustrate the wholeness of things; indeed, mathematics is everywhere. Because mathematics in its broadest sense helps people in their everyday lives, it is an essential school subject. The study aims to determine the profile of the respondents in terms of gender and age, the performance of the control and experimental groups in the pretest and posttest, the impact of the instructional game used as an instructional intervention in teaching algebra to high school students, the significant difference between the performance levels of the two groups of respondents in their pre-test and post-test results, and the instructional intervention that can be proposed. The descriptive method was utilized because it corresponds to the main objective of this research, which is to determine the effectiveness of the instructional game used as an instructional intervention in teaching algebra to high school students. Thirty students served as respondents, with an equal sample size of 15 in each group; among the teacher respondents, 7 (70 percent) were female and 3 (30 percent) were male. The study recommends that mathematics teachers conceptualize instructional games so that students learn mathematics with fun and enjoyment, and that mathematics education program supervisors provide training for teachers on how to design mathematics interventions for student learning. Meaningful activities must be provided to sustain students’ interest, and students must be given time to have fun in the classroom through playing while learning, since they consider mathematics difficult. Future researchers should continue developing mathematics interventions that meet students’ needs, and teachers should incorporate more educational games so that lessons are successful and joyful.
Keywords: instructional game in algebra, mathematical intervention, joyful, successful
Procedia PDF Downloads 595
1026 The Hidden Role of Interest Rate Risks in Carry Trades
Authors: Jingwen Shi, Qi Wu
Abstract:
We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. Our goal is to investigate to what extent carry trade returns are indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding: large downside risks of future exchange rate movements are, in fact, priced significantly in the options market on interest rates. The role played by interest rate risk differs structurally from that of currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially their tail components. For this purpose, our major modeling work is to build a new international asset pricing model in which we use an orthogonal setup for pricing kernels and specify non-Gaussian dynamics in order to capture three sets of option skew accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door for studying the forward premium anomaly through implied information from the interest rate derivative market.
Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity
Procedia PDF Downloads 444
1025 Enhancing Embedded System Efficiency with Digital Signal Processing Cores
Authors: Anil H. Dhanawade, Akshay S., Harshal M. Lakesar
Abstract:
This paper presents a comprehensive analysis of the performance advantages offered by DSP (Digital Signal Processing) cores compared to traditional MCU (Microcontroller Unit) cores in the execution of various functions critical to real-time applications. The focus is on the integration of DSP functionalities, specifically in the context of motor control applications such as Field-Oriented Control (FOC), trigonometric calculations, back-EMF estimation, digital filtering, and high-resolution PWM generation. Through comparative analysis, it is demonstrated that DSP cores significantly enhance processing efficiency, achieving faster execution times for complex mathematical operations essential for precise torque and speed control. The study highlights the capabilities of DSP cores, including single-cycle Multiply-Accumulate (MAC) operations and optimized hardware for trigonometric functions, which collectively reduce latency and improve real-time performance. In contrast, MCU cores, while capable of performing similar tasks, typically exhibit longer execution times due to reliance on software-based solutions and lack of dedicated hardware acceleration. The findings underscore the critical role of DSP cores in applications requiring high-speed processing and low-latency response, making them indispensable in the automotive, industrial, and robotics sectors. This work serves as a reference for future developments in embedded systems, emphasizing the importance of architecture choice in achieving optimal performance in demanding computational tasks.Keywords: CPU core, DSP, assembly code, motor control
Procedia PDF Downloads 14
1024 Exploring Hydrogen Embrittlement and Fatigue Crack Growth in API 5L X52 Steel Pipeline Under Cyclic Internal Pressure
Authors: Omar Bouledroua, Djamel Zelmati, Zahreddine Hafsi, Milos B. Djukic
Abstract:
Transporting hydrogen gas through the existing natural gas pipeline network offers an efficient solution for energy storage and conveyance. Hydrogen generated from excess renewable electricity can be conveyed through the API 5L steel-made pipelines that already exist. In recent years, there has been a growing demand for the transportation of hydrogen through existing gas pipelines. Therefore, numerical and experimental tests are required to verify and ensure the mechanical integrity of the API 5L steel pipelines that will be used for pressurized hydrogen transportation. Internal pressure loading is likely to accelerate hydrogen diffusion through the internal pipe wall and consequently accentuate the hydrogen embrittlement of steel pipelines. Furthermore, pre-cracked pipelines are susceptible to quick failure, mainly under a time-dependent cyclic pressure loading that drives fatigue crack propagation. Meanwhile, after several loading cycles, the initial cracks will propagate to a critical size. At this point, the remaining service life of the pipeline can be estimated, and inspection intervals can be determined. This paper focuses on the hydrogen embrittlement of API 5L steel-made pipeline under cyclic pressure loading. Pressurized hydrogen gas is transported through a network of pipelines where demands at consumption nodes vary periodically. The resulting pressure profile over time is considered a cyclic loading on the internal wall of a pre-cracked pipeline made of API 5L steel-grade material. Numerical modeling has allowed the prediction of fatigue crack evolution and estimation of the remaining service life of the pipeline. The developed methodology in this paper is based on the ASME B31.12 standard, which outlines the guidelines for hydrogen pipelines.Keywords: hydrogen embrittlement, pipelines, transient flow, cyclic pressure, fatigue crack growth
Procedia PDF Downloads 86
1023 Evaluation of Effectiveness of Three Common Equine Thrush Treatments
Authors: A. S. Strait, J. A. Bryk-Lucy, L. M. Ritchie
Abstract:
Thrush is a common disease of ungulates primarily affecting the frog and sulci, caused by the anaerobic bacterium Fusobacterium necrophorum. Thrush accounts for approximately 45.0% of hoof disorders in horses. Prevention and treatment of thrush are essential to keep horses from developing severe infections and becoming lame, and proper knowledge of hoof care and thrush treatments is crucial to avoid financial costs, unsoundness, and lost training time. Research on the effectiveness of the many commercial and homemade thrush treatments is limited in the equine industry. The objective of this study was to compare the effectiveness of three common thrush treatments for horses: weekly application of Thrush Buster, daily dilute bleach solution spray, or Metronidazole paste every other day. Cases of thrush diagnosed by a veterinarian or veterinarian-trained researcher were given a score from 0 to 4, based on the severity of the thrush in each hoof (n=59), and randomly assigned a treatment. Cases were rescored each week of the three-week treatment, and the final and initial scores were compared to determine effectiveness. The thrush treatments were compared with Thrush Buster as the reference at a significance level of α=.05. Binomial logistic regression modeling found that, after adjustment for treatment week, the odds of a hoof treated with Metronidazole being thrush-free were 6.1 times those of a hoof treated with Thrush Buster (p=0.001), while the odds for a hoof treated with bleach were only 0.97 times those of a hoof treated with Thrush Buster (p=0.970). Of the three treatments utilized in this study, Metronidazole paste applied to the affected areas every other day was the most effective treatment for thrush in horses. Many other thrush remedies are available, and further research is warranted to determine the efficacy of additional treatment options.
Keywords: fusobacterium necrophorum, thrush, equine, horse, lameness
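The following sketch is illustrative only and is not the authors' analysis: it shows how a binomial logistic regression with Thrush Buster as the reference treatment could be fitted and how odds ratios are read off the exponentiated coefficients. The data frame, column names, and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical hoof-level records: 1 = thrush-free at rescoring, 0 = still affected
df = pd.DataFrame({
    "thrush_free": rng.integers(0, 2, 300),
    "treatment": rng.choice(["ThrushBuster", "Metronidazole", "Bleach"], 300),
    "week": rng.choice([1, 2, 3], 300),
})

# Thrush Buster as the reference category, adjusted for treatment week
model = smf.logit(
    "thrush_free ~ C(treatment, Treatment(reference='ThrushBuster')) + week",
    data=df,
).fit(disp=0)

odds_ratios = np.exp(model.params)   # exponentiated coefficients are odds ratios
print(odds_ratios.round(2))
```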
Procedia PDF Downloads 153
1022 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes
Authors: V. Churkin, M. Lopatin
Abstract:
The purpose of the paper is to estimate the US small wind turbines market potential and forecast small wind turbine sales in the US. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used for modeling replacement purchases; the single parameter of this distribution is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases), without account of price changes, is 57,080 (confidence interval from 49,294 to 64,866 at P = 0.95) under an average turbine lifetime of 15 years, and 62,402 (confidence interval from 54,154 to 70,648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimated US average market potential of small wind turbines (for adoption purchases) in that case is 42,542 (confidence interval from 32,863 to 52,221 at P = 0.95) under an average lifetime of 15 years, and 47,426 (confidence interval from 36,092 to 58,760 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 95.3%, while in the second it is 95.3%.
Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States
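For readers unfamiliar with Bass-type fitting, the sketch below shows a nonlinear least-squares fit of the basic Bass diffusion model to annual adoption sales; it omits the replacement-purchase extension and the generalized Bass price term described above, and the sales figures and starting values are placeholders, not the AWEA data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_annual_sales(t, m, p, q):
    """Annual adoption sales m * (F(t) - F(t-1)) from the basic Bass model."""
    def F(s):
        return (1 - np.exp(-(p + q) * s)) / (1 + (q / p) * np.exp(-(p + q) * s))
    return m * (F(t) - F(t - 1))

years = np.arange(1, 13)   # 2001..2012 coded as t = 1..12
sales = np.array([1.0, 1.4, 1.9, 2.5, 3.2, 4.0, 4.8, 5.4, 5.7, 5.6, 5.2, 4.6]) * 1e3  # placeholder data

popt, _ = curve_fit(bass_annual_sales, years, sales,
                    p0=[60_000, 0.01, 0.3],
                    bounds=([1e3, 1e-4, 1e-3], [1e6, 1.0, 1.0]))
m, p, q = popt
print(f"market potential m ~ {m:.0f}, innovation p ~ {p:.4f}, imitation q ~ {q:.3f}")
```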
Procedia PDF Downloads 346
1021 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm’s ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A’s data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, or can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel’s revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
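A hedged illustration, not the study's code: benchmarking the four algorithm families named above on a synthetic demand-forecasting task with scikit-learn. Feature names, hyperparameters, and data are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))                # e.g. price, review score, lead time, day of week
y = 100 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=5, size=1000)   # synthetic demand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "kNN": KNeighborsRegressor(n_neighbors=10),
    "SVM": SVR(C=10.0),
    "Regression tree": DecisionTreeRegressor(max_depth=6),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, estimator in models.items():
    pipe = make_pipeline(StandardScaler(), estimator)   # scale features, then fit the regressor
    pipe.fit(X_tr, y_tr)
    print(name, "out-of-sample MAE:", round(mean_absolute_error(y_te, pipe.predict(X_te)), 2))
```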
Procedia PDF Downloads 130
1020 Spatiotemporal Modeling of Under-Five Mortality and Associated Risk Factors in Ethiopia
Authors: Melkamu A. Zeru, Aweke A. Mitiku, Endashaw Amuka
Abstract:
Background: Under-five mortality is the probability that a child will die before reaching exactly five years of age, expressed per 1,000 live births. Exploring the spatial distribution and identifying the temporal pattern are important for reducing under-five child mortality globally, including in Ethiopia. Thus, this study aimed to identify the risk factors of under-five mortality and its spatiotemporal variation across Ethiopian administrative zones. Method: This study used the 2000-2016 Ethiopian Demographic and Health Survey (EDHS) data, which were collected using a two-stage sampling method. A total weighted sample of 43,029 under-five children (10,873 in 2000, 9,861 in 2005, 11,654 in 2011, and 10,641 in 2016) was used. A space-time dynamic model was employed to account for spatial and temporal effects across 65 administrative zones in Ethiopia. Results: The general nesting spatial-temporal dynamic model showed a significant space-time interaction effect [γ = -0.1444, 95% CI (-0.6680, -0.1355)] for under-five mortality. Increases in the percentages of maternal illiteracy [β = 0.4501, 95% CI (0.2442, 0.6559)], unvaccinated children [β = 0.7681, 95% CI (0.5683, 0.9678)], and unimproved water sources [β = 0.5801, 95% CI (0.3793, 0.7808)] increased death rates for under-five children, while increased percentages of contraceptive use [β = -0.6609, 95% CI (-0.8636, -0.4582)] and ANC visits > 4 times [β = -0.1585, 95% CI (-0.1812, -0.1357)] contributed to a decreased under-five mortality rate at the zone level in Ethiopia. Conclusions: Even though the mortality rate for children under five has decreased over time, it remains high in different zones of Ethiopia, and spatial and temporal variation in under-five mortality exists among zones. Therefore, it is very important to consider spatial neighbourhoods and temporal context when aiming to prevent under-five mortality.
Keywords: under-five children mortality, space-time dynamic, spatiotemporal, Ethiopia
Procedia PDF Downloads 36
1019 Cognitive Science Based Scheduling in Grid Environment
Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya
Abstract:
A grid is an infrastructure that allows large volumes of distributed data from multiple locations to be deployed toward a common goal. Scheduling data-intensive applications becomes challenging as data sets grow very large. Only two solutions exist to tackle this issue: either the computation that requires huge data sets is transferred to the data site, or the required data sets are transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing requests and creating the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents also predict the next request in order to serve the requesting site with data sets in advance, which reduces data availability time and data transfer time. A replica catalog and metadata catalog created by the agents assist in the decision-making process.
Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence
Procedia PDF Downloads 392
1018 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education
Authors: Gülay Dalgıç, Gildis Tachir
Abstract:
Architectural design studios are the ambience in which architectural design is realized as a palpable product in architectural education. In the design studios, the information the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. In the studio ambience, many designs and thought relationships are evaluated, and the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. Data on the defined early design phase criteria were obtained with feedback graphics created for architect candidates who studied through e-learning in the first year of architectural education and then continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program. It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.
Keywords: education modeling, architecture education, design education, design process
Procedia PDF Downloads 135
1017 Human Resource Information System: Role in HRM Practices and Organizational Performance
Authors: Ejaz Ali, M.Phil
Abstract:
Enterprise Resource Planning (ERP) systems are playing a vital role in the effective management of business functions in large and complex organizations. The Human Resource Information System (HRIS) is a core module of ERP, providing concrete solutions to implement Human Resource Management (HRM) practices in an innovative and efficient manner. Over the last decade, there has been a considerable increase in studies on HRIS. Nevertheless, previous studies have largely neglected the moderating role of HRIS in performing HRM practices that may affect firms’ performance. The current study examines the impact of HRM practices (training, performance appraisal) on perceived organizational performance, with the moderating role of HRIS where the system is in place. The study is based on the Resource-Based View (RBV) and Ability-Motivation-Opportunity (AMO) theories, which advocate that strengthening human capital enables an organization to achieve and sustain competitive advantage, leading to improved organizational performance. Data were collected through a structured questionnaire based on adopted instruments, after establishing reliability and validity. Structural equation modeling (SEM) was used to assess model fit, test the hypotheses, and establish the validity of the instruments through Confirmatory Factor Analysis (CFA). A total of 220 employees of 25 firms in the corporate sector were sampled through a non-probability sampling technique. Path analysis revealed that HRM practices and HRIS have a significant positive impact on organizational performance. The results further showed that HRIS moderated the relationships between training, performance appraisal, and organizational performance. The interpretation of the findings, limitations, and theoretical and managerial implications are discussed.
Keywords: enterprise resource planning, human resource, information system, human capital
Procedia PDF Downloads 395
1016 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operations management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for quickly and accurately measuring a 3D shape model, and it is used at construction sites and for cultural heritage management. However, there are many limitations in processing a TLS point cloud, because the raw point cloud is a massive volume of data, and the capability of carrying out useful analyses is limited with unstructured 3D points. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective: the Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a bridge connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea; it is about 32 m long and 2 m wide and is used as a pedestrian walkway between the two buildings. The 3D point cloud of the test bed was constructed from a TLS measurement and divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are promising, and the segmented point cloud can be processed further to manage the configuration of each member.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
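The sketch below is not the authors' implementation; it shows a basic Fuzzy C-Means iteration applied directly to XYZ coordinates, with a hard label taken from the maximum membership. The similarity-driven cluster merging described in the abstract is omitted, and the point cloud is synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-Means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # memberships of each point sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Synthetic stand-in for TLS XYZ coordinates (metres) of a 32 m x 2 m bridge deck
points = np.random.rand(5000, 3) * np.array([32.0, 2.0, 3.0])
centers, memberships = fuzzy_c_means(points, c=4)
labels = memberships.argmax(axis=1)                      # hard label from the maximum membership
print(centers.round(2))
```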
Procedia PDF Downloads 231
1015 The Antecedents of Internet Addiction toward Smartphone Usage
Authors: Pui-Lai To, Chechen Liao, Hen-Yi Huang
Abstract:
Twenty years after the development of the Internet, scholars have started to identify its negative impacts. Overuse of the Internet can develop into Internet dependency and, in turn, cause addictive behavior; therefore, understanding the phenomenon of Internet addiction is important. With the joint efforts of experts and scholars, Internet addiction has been officially listed as a symptom that affects public health, and the diagnosis, causes, and treatment of the symptom have also been explored. On the other hand, in the area of smartphone Internet usage, most studies still focus on the motivational factors of smartphone use, and not much research has been done on smartphone Internet addiction. In view of the increasing adoption of smartphones, this paper is intended to find out whether smartphone Internet addiction exists in modern society. This study adopted an online survey targeting users with smartphone Internet experience, and a total of 434 valid responses were collected. For data analysis, Partial Least Squares (PLS) structural equation modeling (SEM) was used for sample analysis and research model testing; the software chosen for statistical analysis was SPSS 20.0 for Windows and SmartPLS 2.0. The results show that smartphone users who access Internet services via their smartphones can indeed develop smartphone Internet addiction. Flow experience, depression, virtual social support, smartphone Internet affinity, and maladaptive cognition all have a significant and positive influence on smartphone Internet addiction. In the context of smartphone Internet use, descriptive norm has a positive and significant influence on perceived playfulness, while perceived playfulness has a significant and positive influence on flow experience. Depression, in turn, is negatively influenced by actual social support and positively influenced by virtual social support.
Keywords: internet addiction, smartphone usage, social support, perceived playfulness
Procedia PDF Downloads 244
1014 Comparative Study of Flood Plain Protection Zone Determination Methodologies in Colombia, Spain and Canada
Authors: P. Chang, C. Lopez, C. Burbano
Abstract:
Flood protection zones are riparian buffers that are established to manage and mitigate the impact of flooding and, in turn, protect local populations. The purpose of this study was to evaluate the Guía Técnica de Criterios para el Acotamiento de las Rondas Hídricas in Colombia against international regulations in Canada and Spain, in order to determine its limitations and contribute to its improvement. The need to establish a specific corridor that allows for the dynamic development of a river is clear; however, limitations present in the Colombian Technical Guide are identified. The study shows that international regulations provide concepts similar to those used in Colombia, but additionally integrate aspects such as regionalization, which allows for a better characterization of the channel way, and incorporate the frequency of flooding and its probability of occurrence into the concept of risk when determining the protection zone. The case study analyzed in Dosquebradas, Risaralda, aimed at comparing the application of the different standards through hydraulic modeling. It highlights that the current Colombian standard does not offer sufficient detail in its implementation phase, which leads to a false sense of security related to inaccuracy and lack of data. Furthermore, the study demonstrates how the Colombian norm is ill-adapted to the conditions of Dosquebradas, typical of the Andes region, in both social and hydraulic aspects, and neither reduces the risk nor improves the protection of the population. Our study considers it pertinent to include risk estimation as an integral part of the methodology when establishing flood protection zones, considering the particularity of water systems, which are characterized by heterogeneous natural dynamic behavior.
Keywords: environmental corridor, flood zone determination, hydraulic domain, legislation flood protection zone
Procedia PDF Downloads 112
1013 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures
Authors: Michał Lidner, Zbigniew Szcześniak
Abstract:
The ability to estimate blast load overpressure properly plays an important role in the safe design of buildings. Blast loading on structural elements has been studied for many years; however, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to approximate the real blast load overpressure function versus time more closely. The paper presents a method of numerical analysis of air shock wave propagation. It uses the Finite Volume Method and takes energy losses due to heat transfer into account with respect to the adiabatic process rule. A system of three equations (conservation of mass, momentum, and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments, which can inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside an insusceptible steel tube (the 1D case), and an explosion inside an insusceptible cube (the 3D case). The results of the numerical analysis were compared with literature reports; values of impulse, pressure, and their duration were studied. Overall, good agreement of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.
Keywords: adiabatic process, air shock wave, explosive, finite volume method
Procedia PDF Downloads 190
1012 Numerical Modeling of Film Cooling of the Surface at Non-Uniform Heat Flux Distributions on the Wall
Authors: M. V. Bartashevich
Abstract:
The problem of heat transfer in a thin laminar liquid film is solved numerically. A thin liquid film flows down an inclined surface under conditions of variable heat flux on the wall. The use of thin liquid films makes it possible to create effective technologies for cooling surfaces. However, it is important to investigate the most suitable cooling regimes from a safety point of view, for example, to avoid overheating caused by ruptures of the liquid film, and also to study the most effective cooling regimes depending on the character of the heat flux distribution on the wall and on the character of the blowing of the film surface, i.e., the external shear stress on its surface. In the problem statement, the heat transfer coefficient between the liquid and the gas is specified on the film surface, as well as a variable external shear stress, the intensity of blowing. It is shown that the combination of these factors, the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing, affects the efficiency of heat transfer. With an increase in the intensity of blowing, the cooling efficiency increases, reaches a maximum, and then decreases. It is also shown that the more uniform the heating of the wall, the more efficient the heat sink. A separate study was made for the flow regime along a horizontal surface, when the liquid film moves solely due to the external stress. For this mode, an analytical solution for the temperature in the entrance region is used for further numerical calculations downstream. The influence of the degree of uniformity of the heat flux distribution on the wall and of the intensity of blowing of the film surface on the heat transfer efficiency was also studied for this regime. This work was carried out at the Kutateladze Institute of Thermophysics SB RAS (Russia) and supported by FASO Russia.
Keywords: heat flux, heat transfer enhancement, external blowing, thin liquid film
Procedia PDF Downloads 148
1011 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery
Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour
Abstract:
In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is developed based on patient-specific dimensions obtained from the experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder-length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on the detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to model the shear-thinning behaviour of the blood flow. According to the results, the structural responses in terms of the maximum principal stress (MPS) are affected more compared to the fluid responses in terms of wall shear stress (WSS) as the geometrical characteristics are varying. The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.Keywords: atherosclerosis, fluid-Structure interaction modeling, material stability analysis, and nonlinear biomechanics
Procedia PDF Downloads 86
1010 Antecedents of Regret and Satisfaction in Electronic Commerce
Authors: Chechen Liao, Pui-Lai To, Chuang-Chun Liu
Abstract:
Online shopping has become very popular recently. In today’s highly competitive online retail environment, retaining existing customers is a necessity for online retailers. This study focuses on the antecedents and consequences of Internet buyer regret and satisfaction in the online consumer purchasing process. This study examines the roles that online consumer’s purchasing process evaluations (i.e., search experience difficulty, service-attribute evaluations, product-attribute evaluations and post-purchase price perceptions) and alternative evaluation (i.e., alternative attractiveness) play in determining buyer regret and satisfaction in e-commerce. The study also examines the consequences of regret, satisfaction and habit in regard to repurchase intention. In addition, this study attempts to investigate the moderating role of habit in attaining a better understanding of the relationship between repurchase intention and its antecedents. Survey data collected from 431 online customers are analyzed using structural equation modeling (SEM) with partial least squares (PLS) and support provided for the hypothesized links. These results indicate that online consumer’s purchasing process evaluations (i.e., search experience difficulty, service-attribute evaluations, product-attribute evaluations and post-purchase price perceptions) have significant influences on regret and satisfaction, which in turn influences repurchase intention. In addition, alternative evaluation (i.e., alternative attractiveness) has a significant positive influence on regret. The research model can provide a richer understanding of online customers’ repurchase behavior and contribute to both research and practice.Keywords: online shopping, purchase evaluation, regret, satisfaction
Procedia PDF Downloads 282
1009 An Analysis of Different Essential Components of Flight Plan Operations at Low Altitude
Authors: Apisit Nawapanpong, Natthapat Boonjerm
Abstract:
This project aims to analyze and identify flight plans for low-altitude aviation in Thailand and other countries. The development of UAV technology has driven innovation and revolution in the aviation industry, including the development of new modes of passenger and freight transportation, and it has also affected other industries widely. At present, this technology is developing rapidly, is being tested all over the world to maximize its efficiency, and is likely to grow more extensively. However, no flight plan for low-altitude operation has been published by government organizations; compared with high-altitude aviation with manned aircraft, various unique factors differ, whether mission, operation, altitude range, or airspace restrictions. Making the essential components of low-altitude operation measures practical and tangible has posed major problems, so the main consideration of this project is to analyze the components of low-altitude operations conducted at altitudes of up to 400 ft (120 meters) above ground level with reference to the terrain, for example, air traffic management, classification of aircraft, basic necessities and safety, and control areas. This research focuses on confirming the theory through qualitative and quantitative research combined with theoretical modeling and regulatory frameworks, and by gaining insights from various positions in the aviation industry, including aviation experts, government officials, air traffic controllers, pilots, and airline operators, to identify the critical essential components of low-altitude flight operation. The project uses scientific and statistical computer programs to show that the results are consistent with the theory and can benefit the regulation of flight plans for low-altitude operation through the essential components identified here, and it can be further developed for future studies and research in the aviation industry.
Keywords: low-altitude aviation, UAV technology, flight plan, air traffic management, safety measures
Procedia PDF Downloads 67
1008 Rational Approach to the Design of a Sustainable Drainage System for Permanent Site of Federal Polytechnic Oko: A Case Study for Flood Mitigation and Environmental Management
Authors: Fortune Chibuike Onyia, Femi Ogundeji Ayodele
Abstract:
The design of a drainage system at the permanent site of Federal Polytechnic Oko in Anambra State is critical for mitigating flooding, managing surface runoff, and ensuring environmental sustainability. The design process employed a comprehensive analysis involving topographical surveys, hydraulic modeling, and the assessment of local soil types to ensure stability and efficient water conveyance. Proper slope gradients were considered to maintain adequate flow velocities and avoid sediment deposition, which could hinder long-term performance. From the result, the channel size estimated was 0.199m by 0.0199m and 0.0199m². This study proposed a channel size of 1.4m depth by 0.5m width and 0.7m², optimized to accommodate the anticipated peak flow resulting from heavy rainfall and storm-water events. This sizing is based on hydrological data, which takes into account rainfall intensity, runoff coefficients, and catchment area characteristics. The objective is to effectively convey storm-water while preventing overflow, erosion, and subsequent damage to infrastructure and properties. This sustainable approach incorporates provisions for maintenance and aligns with urban drainage standards to enhance durability and reliability. Implementing this drainage system will mitigate flood risks, safeguard campus facilities, improve overall water management, and contribute to the development of resilient infrastructure at Federal Polytechnic Oko.Keywords: flood mitigation, drainage system, sustainable design, environmental management
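As context for the sizing discussion (not the study's actual design calculation), the sketch below pairs a Rational Method peak-flow estimate (Q = C·i·A) with a Manning capacity check for the proposed 0.5 m wide by 1.4 m deep rectangular channel; the runoff coefficient, rainfall intensity, catchment area, roughness, and slope are all assumed values.

```python
# All inputs below are illustrative assumptions, not values from the study
C = 0.60        # runoff coefficient for mixed campus surfaces (assumed)
i = 120.0       # design rainfall intensity in mm/h (assumed)
A = 0.05        # catchment area in km^2, i.e. 5 ha (assumed)

Q_peak = C * (i / 1000 / 3600) * (A * 1e6)          # Rational Method, Q = C*i*A, in m^3/s
print(f"Rational-method peak flow: {Q_peak:.2f} m^3/s")

# Manning capacity check for the proposed 0.5 m wide x 1.4 m deep rectangular channel
b, y = 0.5, 1.4                                      # width and flow depth (m)
n, S = 0.013, 0.005                                  # concrete roughness and assumed bed slope
area = b * y
R = area / (b + 2 * y)                               # hydraulic radius of a rectangular section
Q_cap = (1.0 / n) * area * R ** (2.0 / 3.0) * S ** 0.5
print(f"Channel capacity: {Q_cap:.2f} m^3/s")
```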
Procedia PDF Downloads 5
1007 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan, were taken as a test case in order to assess their quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach’s alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is established to be flat, tectonically little affected, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also demonstrated the bias and erratic nature of the acquired database, and the low estimated value of alpha (α) in Cronbach’s alpha test confirmed its poor reliability. A very low-quality database needs excessive static correction or, in some cases, reacquisition of data, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and as a guideline to establish database quality assessment models that support more informed decisions in the hydrocarbon exploration field.
Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey
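A minimal sketch, separate from the study's workflow, of two of the statistics named above: the range-normalized RMSE and Cronbach's alpha. The arrays are synthetic stand-ins for the observed and predicted seismic attributes.

```python
import numpy as np

def nrmse(observed, predicted):
    """Root mean square error normalized by the observed data range."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """items: 2-D array with rows = observations and columns = repeated measurements."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
obs = rng.normal(size=200)
pred = obs + rng.normal(scale=0.5, size=200)
items = obs[:, None] + rng.normal(scale=0.7, size=(200, 5))

print("NRMSE:", round(nrmse(obs, pred), 3))
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```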
Procedia PDF Downloads 162
1006 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming.
Procedia PDF Downloads 10
1005 Characterization and Monitoring of the Yarn Faults Using Diametric Fault System
Authors: S. M. Ishtiaque, V. K. Yadav, S. D. Joshi, J. K. Chatterjee
Abstract:
The DIAMETRIC FAULTS system has been developed to capture bi-directional images of yarn continuously and sequentially and to provide a detailed classification of faults. A novel mathematical framework developed on the acquired bi-directional images forms the basis of fault classification into four broad categories, namely Thick1, Thick2, Thin, and Normal Yarn. A discretised version of the Radon transform has been used to convert the bi-directional images into one-dimensional signals. Images were divided into training and test sample sets. A Karhunen–Loève Transform (KLT) basis is computed for the signals from the training images of each fault class, taking the top six highest-energy eigenvectors. The fault class of a test image is identified by taking the Euclidean distance of its signal from its projection onto the KLT basis of each fault class in the training set. An accuracy of about 90% in detecting the correct fault class is achieved across the various distance-based techniques. The four broad fault classes were further sub-classified into four sub-groups based on user-set boundary limits for fault length and fault volume; the fault cross-sectional area and the fault length define the total fault volume. A distinct distribution of faults is found in terms of their volume and physical dimensions, which can be used for monitoring yarn faults. The configuration-based characterization and classification show that spun yarn faults arising from mass variation exhibit distinct characteristics in terms of their contours, sizes, and shapes, apart from their frequency of occurrence.
Keywords: Euclidean distance, fault classification, KLT, Radon Transform
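The following is an illustrative sketch, not the DIAMETRIC FAULTS code: a single-angle discrete Radon projection turns each image into a 1-D signal, a per-class KLT (PCA) basis of six eigenvectors is built from training signals, and a test signal is assigned to the class whose subspace gives the smallest Euclidean reconstruction distance. Image sizes, the projection angle, and all data are assumptions.

```python
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(3)
classes = ["Thick1", "Thick2", "Thin", "Normal"]

def image_to_signal(img):
    # One-dimensional signal from a single-angle discrete Radon projection
    return radon(img, theta=[0.0], circle=False).ravel()

# Build a six-vector KLT (PCA) basis per class from synthetic training images
bases, means = {}, {}
for c in classes:
    signals = np.stack([image_to_signal(rng.random((64, 64))) for _ in range(20)])
    means[c] = signals.mean(axis=0)
    _, _, vt = np.linalg.svd(signals - means[c], full_matrices=False)
    bases[c] = vt[:6]                    # top-6 highest-energy eigenvectors

def classify(img):
    s = image_to_signal(img)
    dists = {}
    for c in classes:
        proj = means[c] + (s - means[c]) @ bases[c].T @ bases[c]  # projection onto the class subspace
        dists[c] = np.linalg.norm(s - proj)                       # Euclidean distance to the subspace
    return min(dists, key=dists.get)

print(classify(rng.random((64, 64))))
```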
Procedia PDF Downloads 263
1004 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially in the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control; therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the heat conductivity coefficient are two variables that are treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature and the time it takes for the concrete to reach that temperature are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The current study is conducted on 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that assuming a constant conductivity coefficient does not change the maximum temperature, but a variable specific heat capacity must be taken into account; regarding the time at which the concrete's central node reaches its maximum temperature, the variable specific heat capacity can likewise have a considerable effect on the final result. Also, the use of GGBFS has more influence than fly ash.
Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
Procedia PDF Downloads 75
1003 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches
Authors: Kakali Bhadra
Abstract:
Binding of small molecules to DNA and recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for control of gene expression. This work focuses towards understanding interaction of harmalol, a dihydro beta-carboline alkaloid, with different nucleic acid motifs viz. double stranded CT DNA, single stranded A-form poly(A), double-stranded A-form of poly(C)·poly(G) and clover leaf tRNAphe by different spectroscopic, calorimetric and molecular modeling techniques. Results of this study converge to suggest that (i) binding constant varied in the order of CT DNA > poly(C)·poly(G) > tRNAphe > poly(A), (ii) non-cooperative binding of harmalol to poly(C)·poly(G) and poly(A) and cooperative binding with CT DNA and tRNAphe, (iii) significant structural changes of CT DNA, poly(C)·poly(G) and tRNAphe with concomitant induction of optical activity in the bound achiral alkaloid molecules, while with poly(A) no intrinsic CD perturbation was observed, (iv) the binding was predominantly exothermic, enthalpy driven, entropy favoured with CT DNA and poly(C)·poly(G) while it was entropy driven with tRNAphe and poly(A), (v) a hydrophobic contribution and comparatively large role of non-polyelectrolytic forces to Gibbs energy changes with CT DNA, poly(C)·poly(G) and tRNAphe, and (vi) intercalated state of harmalol with CT DNA and poly(C)·poly(G) structure as revealed from molecular docking and supported by the viscometric data. Furthermore, with competition dialysis assay it was shown that harmalol prefers hetero GC sequences. All these findings unequivocally pointed out that harmalol prefers binding with ds CT DNA followed by ds poly(C)·poly(G), clover leaf tRNAphe and least with ss poly(A). The results highlight the importance of structural elements in these natural beta-carboline alkaloids in stabilizing different DNA and RNA of various motifs for developing nucleic acid based better therapeutic agents.Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy
Procedia PDF Downloads 227
1002 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm
Authors: Belgherbi Aicha, Bessaid Abdelhafid
Abstract:
In recent years, a great deal of research has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm runs in two parts. In the first, we determine the region of interest by applying morphological operations to extract the liver and spleen. The second step consists of improving the quality of the image gradient: we propose a method that reduces the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we tested it on several images, and our approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented liver and spleen contours and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%, and spleen segmentation achieved similarly promising results, with a sensitivity of 95% and a specificity of 99%.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
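A hedged sketch, not the authors' pipeline: smoothing, a Sobel gradient, and a marker-controlled watershed on a synthetic 2-D slice standing in for an abdominal CT image. Thresholds and filter choices are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

# Synthetic slice with two bright regions standing in for liver and spleen
img = np.zeros((256, 256))
img[60:140, 50:150] = 1.0
img[160:210, 170:230] = 0.8
img = gaussian(img + 0.05 * np.random.rand(256, 256), sigma=2)  # spatial smoothing reduces over-segmentation

gradient = sobel(img)                         # improved gradient image used as the watershed relief
mask = img > 0.4
markers, _ = ndi.label(mask)                  # each connected bright region becomes a marker
labels = watershed(gradient, markers=markers, mask=mask)
print(np.unique(labels))                      # 0 = background, 1..n = segmented regions
```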
Procedia PDF Downloads 323
1001 Examining the Structural Model of Mindfulness and Headache Intensity With the Mediation of Resilience and Perfectionism in Migraine Patients
Authors: Alireza Monzavi Chaleshtari, Mahnaz Aliakbari Dehkordi, Nazila Esmaeili, Ahmad Alipour, Amin Asadi Hieh
Abstract:
Headache disorders are among the most common disorders of the nervous system and are associated with suffering, disability, and financial costs for patients. Mindfulness as a lifestyle, in line with human nature, can affect the emotional system, i.e., people's thoughts, body sensations, raw emotions, and action impulses. The aim of this study was to test the fit of a structural model of mindfulness and headache severity mediated by resilience and perfectionism in patients with migraine. Methods: The statistical population of this study included all patients with migraine referred to neurologists in Tehran in the spring and summer of 1401. The inclusion criteria were diagnosis of migraine by a neurologist, not having mental disorders or other physical diseases, and having at least a diploma. According to the number of research variables, 180 people were selected by convenience sampling and answered, online, the Ahvaz Perfectionism Questionnaire (AMQ), the Connor-Davidson Resilience Scale (CD-RISC), the Ahvaz Migraine Headache Questionnaire (APS), and the 5-factor mindfulness questionnaire (MAAS). Data were analyzed using structural equation modeling and Amos software. Results: The direct pathway from mindfulness to headache severity was not significant (P > 0.05), but the other direct pathways (mindfulness to resilience, mindfulness to perfectionism, resilience to headache severity, and perfectionism to headache severity) were significant (P < 0.01). After modifying the model and removing the non-significant paths, the final model showed a good fit. The mediating variables, resilience and perfectionism, mediated all paths from the predictor variables to the criterion. Conclusion: According to the findings of the present study, mindfulness in migraine patients reduces the severity of headache by promoting resilience and reducing perfectionism.
Keywords: migraine, headache severity, mindfulness, resilience, perfectionism
Procedia PDF Downloads 79
1000 Functional Connectivity Signatures of Polygenic Depression Risk in Youth
Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip
Abstract:
Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach connectome-based predictive modeling (CPM) to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, during affective working memory, during response inhibition, during reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho’s=0.20-0.27, p’s<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience to subcortical connectivity, and decreased subcortical to motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N’s=1,000, N’s=500, and N’s=250 but became unstable at N’s=100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.Keywords: genetics, functional connectivity, pre-adolescents, depression
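As an illustration of the CPM idea only (not the study's pipeline), the sketch below runs one training/test split rather than the 10-fold procedure described above: edges are selected by correlation with the phenotype in training subjects, summed into a network-strength score, and used in a linear fit. Sample sizes, thresholds, and data are synthetic assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, n_edges = 500, 3000
edges = rng.normal(size=(n_subj, n_edges))                             # stand-in for connectivity edges
prs = edges[:, :5].mean(axis=1) + rng.normal(scale=0.5, size=n_subj)   # stand-in for DEP-PRS

train, test = np.arange(0, 400), np.arange(400, 500)

# Edge selection: correlate every edge with the phenotype in the training subjects
corr = [stats.pearsonr(edges[train, j], prs[train]) for j in range(n_edges)]
r = np.array([c[0] for c in corr])
p = np.array([c[1] for c in corr])
pos_mask, neg_mask = (p < 0.01) & (r > 0), (p < 0.01) & (r < 0)

def strength(idx):
    # Single-subject summary: summed strength over positively minus negatively selected edges
    return edges[idx][:, pos_mask].sum(axis=1) - edges[idx][:, neg_mask].sum(axis=1)

slope, intercept = np.polyfit(strength(train), prs[train], 1)           # linear CPM model
pred = slope * strength(test) + intercept
print("held-out Spearman rho:", round(stats.spearmanr(pred, prs[test]).correlation, 2))
```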
Procedia PDF Downloads 57
999 Optimizing University Administration in a Globalized World: Leveraging AI and ICT for Enhanced Governance and Sustainability in Higher Education
Authors: Ikechukwu Ogeze Ukeje, Chinyere Ori Elom, Chukwudum Collins Umoke
Abstract:
This study explores the challenges of integrating Artificial Intelligence (AI) and Information and Communication Technology (ICT) practices to enhance governance and sustainable solution modeling in higher education, focusing on Alex Ekwueme Federal University Ndufu-Alike (AE-FUNAI), Nigeria. In the context of a developing country like Nigeria, leveraging AI and ICT tools presents a unique opportunity to improve teaching, learning, administrative processes, and governance. The research aims to evaluate how AI and ICT technologies can contribute to sustainable educational practices, enhance decision-making processes, and improve engagement among key stakeholders: students, lecturers, and administrative staff. Students are involved to provide insights into their interactions with AI and ICT tools, particularly in learning and participation in governance. Lecturers' perspectives will offer a view into how these technologies influence teaching, research, and curriculum development. Administrative staff will provide a crucial understanding of how AI and ICT tools can streamline operations, support data-driven governance, and enhance institutional efficiency. This study will use a mixed-methods approach to collect both qualitative and quantitative data. The findings are geared towards shaping the future of education in Nigeria and beyond by developing an Inclusive AI-Governance Integration Framework (I-AIGiF) for enhanced performance in the system. By examining the roles of these stakeholder groups, this research could guide the development of policies for more effective AI and ICT integration, leading to sustainable educational innovation and governance.
Keywords: university administration, AI, higher education governance, education sustainability, ICT challenges
Procedia PDF Downloads 16