Search results for: musculoskeletal modeling
840 Analysis of the Detachment of Water Droplets from a Porous Fibrous Surface
Authors: Ibrahim Rassoul, E-K. Si Ahmed
Abstract:
The growth, deformation, and detachment of fluid droplets adherent to solid substrates is a problem of fundamental interest with numerous practical applications. Of specific interest here is the problem of a droplet on a fibrous, hydrophobic substrate subjected to body or external forces (gravity, convection). The past decade has seen tremendous advances in proton exchange membrane fuel cell (PEMFC) technology. However, many challenges remain in bringing commercially viable stationary PEMFC products to the market. PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On the one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause 'flooding' (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The aim of this work is to investigate the stability of a liquid water droplet emerging from a gas diffusion layer (GDL) pore, to gain fundamental insight into the instability process leading to detachment. The approach will combine analytical and numerical modeling with experimental visualization and measurements. Keywords: polymer electrolyte fuel cell, water droplet, gas diffusion layer, contact angle, surface tension
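The detachment threshold described above can be illustrated with a simple force balance: the droplet is swept off once aerodynamic drag exceeds the surface-tension retention force set by contact-angle hysteresis. The Python sketch below is a minimal illustration of that balance only; the Furmidge-type retention form, the O(1) shape factor `k`, the half-sphere drag area, and all numerical values are assumptions for illustration, not results from the study.

```python
import math

def adhesion_force(gamma, contact_width, theta_adv_deg, theta_rec_deg, k=1.0):
    """Retention force from contact-angle hysteresis (Furmidge-type form).
    gamma: surface tension [N/m]; contact_width: contact-line width [m];
    k: assumed O(1) shape factor."""
    ta = math.radians(theta_adv_deg)
    tr = math.radians(theta_rec_deg)
    return k * gamma * contact_width * (math.cos(tr) - math.cos(ta))

def drag_force(rho, velocity, diameter, c_d=1.0):
    """Aerodynamic drag on the droplet's frontal area (half-sphere assumed)."""
    area = 0.5 * math.pi * (diameter / 2.0) ** 2
    return 0.5 * rho * velocity ** 2 * c_d * area

def critical_velocity(gamma, diameter, theta_adv_deg, theta_rec_deg,
                      rho=1.2, c_d=1.0, k=1.0):
    """Air velocity at which drag balances adhesion and the droplet detaches."""
    f_adh = adhesion_force(gamma, diameter, theta_adv_deg, theta_rec_deg, k)
    area = 0.5 * math.pi * (diameter / 2.0) ** 2
    return math.sqrt(2.0 * f_adh / (rho * c_d * area))
```

Because the retention force scales with the contact-line width while the drag scales with the frontal area, the critical velocity falls as the droplet grows, consistent with larger droplets detaching first.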
Procedia PDF Downloads 254
839 Ontology-Based Fault Detection and Diagnosis System: Querying and Reasoning Examples
Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes
Abstract:
One of the strongholds in the ubiquitous efforts related to energy conservation and energy efficiency improvement is the retrofit of high energy consumers in buildings. In general, HVAC systems represent the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various Fault Detection and Diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to application at a single device/unit level. In the case of more complex systems, where multiple devices operate in the context of the same building, significant energy efficiency improvements can only be achieved through the application of comprehensive FDD systems relying on additional higher-level knowledge, such as geographical location, served area, and intra- and inter-system dependencies. This paper presents a comprehensive FDD system that relies on a common knowledge repository that stores all critical information. The discussed system is deployed as a test-bed platform at the Fiumicino and Malpensa airports in Italy. This paper presents the advantages of implementing the knowledge base through an ontology and illustrates the improved functionalities of such a system through examples of typical queries and reasoning that enable the derivation of high-level energy conservation measures (ECM). Therefore, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high-level irregularities in the operation of airport heating/cooling plants is discussed, and an estimation of energy savings is reported. Keywords: airport ontology, knowledge management, ontology modeling, reasoning
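The kind of diagnosis that such SWRL rules encode can be mimicked in plain code: a rule fires when readings stored in the knowledge base violate a constraint. The Python sketch below is a hypothetical stand-in for one classic HVAC rule (simultaneous heating and cooling in an air-handling unit); the unit names, signal names, and threshold are illustrative assumptions, not contents of the airport ontologies.

```python
def detect_simultaneous_heating_cooling(readings, tolerance=0.5):
    """Flag air-handling units whose heating and cooling valves are both
    open beyond a tolerance at the same time -- a classic energy-waste fault."""
    faults = []
    for unit, r in readings.items():
        if r["heating_valve"] > tolerance and r["cooling_valve"] > tolerance:
            faults.append(unit)
    return faults

# invented snapshot of valve positions (0 = closed, 1 = fully open)
readings = {
    "AHU-1": {"heating_valve": 0.8, "cooling_valve": 0.7},
    "AHU-2": {"heating_valve": 0.9, "cooling_valve": 0.0},
}
```

In the ontology-based system, the same check would be expressed declaratively as a SWRL rule over sensor individuals rather than as procedural code.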
Procedia PDF Downloads 540
838 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading
Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth
Abstract:
The ultimate load analysis of RC pile groups has assumed a lot of significance under liquefying soil conditions, especially due to post-earthquake studies of the 1964 Niigata, 1995 Kobe, and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation, in turn, is related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased lateral loading, a precise modeling of the inelastic behavior of the pile cross-section is done, considering the complete stress-strain behavior of concrete, with and without confinement, and of reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to the design codes for pile groups in liquefying soils. Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group
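One common way to represent liquefaction in such non-linear p-y soil springs is to scale the non-liquefied resistance by a p-multiplier well below one. The Python sketch below shows the idea on a hyperbolic backbone curve; both the backbone form and the p-multiplier value are illustrative assumptions, not the spring model actually used in the study.

```python
def degraded_py_resistance(p_ultimate, y, y50, p_multiplier=0.1):
    """Lateral soil resistance p(y) from a hyperbolic p-y backbone,
    scaled by a p-multiplier (< 1) to represent liquefaction-induced
    strength loss. p_ultimate in kN/m; y and y50 in m; the 0.1 default
    is illustrative only."""
    p = p_ultimate * y / (y50 + y)  # hyperbolic backbone curve
    return p_multiplier * p
```

Varying the p-multiplier with depth reproduces the abstract's point that the spring degradation can vary along the pile, following the site's liquefaction potential.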
Procedia PDF Downloads 164
837 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the superior one for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy. Keywords: customer value, Huff's gravity model, POS, retailer
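Huff's gravity model itself is compact: the probability that a customer patronizes store j is its attractiveness (often floor area) raised to a power, divided by distance raised to a decay exponent, normalized over all candidate stores. The Python sketch below shows the standard form; the exponents and the two-store data are illustrative assumptions, not values from the paper.

```python
def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff model: P_j = A_j**alpha / d_j**beta, normalized over all
    candidate stores, so the probabilities sum to one."""
    utilities = [a ** alpha / d ** beta
                 for a, d in zip(attractiveness, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# invented example: floor areas (attractiveness proxy) and distances (km)
# from one customer's home to two competing supermarkets
probs = huff_probabilities(attractiveness=[2000.0, 1000.0],
                           distances=[1.0, 2.0])
```

Because competitor ID-POS data are unavailable, probabilities of this kind serve as a proxy for the competitors' pull on each customer.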
Procedia PDF Downloads 125
836 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. Nevertheless, selecting dominant input variables among wastewater treatment parameters has often been overlooked, even though it could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city's wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target parameter to determine water quality. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B, compared with model A, showed up to a 15% increment in Determination Coefficient (DC). Thus, this study highlights the advantage of the MI method in selecting dominant input variables for ANN modeling of wastewater plant performance. Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
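The mutual information measure behind model B can be estimated directly from binned data as the divergence between the joint frequencies and the product of the marginals. The sketch below is a generic empirical estimator for discrete (or pre-binned) sequences, offered as an illustration of the measure, not the exact estimator used in the study.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two equal-length
    discrete sequences, from joint and marginal frequency counts."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy * n * n / (count_x * count_y) == p_xy / (p_x * p_y)
        mi += p_xy * math.log(p_xy * n * n / (px[x] * py[y]))
    return mi
```

Inputs with high MI against the binned BOD target would be the ones model B retains; unlike PCA, MI also captures non-linear dependence.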
Procedia PDF Downloads 131
835 VR in the Middle School Classroom: An Experimental Study on Spatial Relations and Immersive Virtual Reality
Authors: Danielle Schneider, Ying Xie
Abstract:
Middle school science, technology, engineering, and math (STEM) teachers face an exceptional challenge in the expectation to incorporate curricula that build strong spatial reasoning skills on rudimentary geometry concepts. Because spatial ability is so closely tied to STEM students' success, researchers are tasked with determining effective instructional practices that create an authentic learning environment within the immersive virtual reality learning environment (IVRLE). This study investigated the effect of the IVRLE on middle school STEM students' spatial reasoning skills. This experimental study comprised thirty 7th-grade STEM students divided into a treatment group, which built an object in the virtual realm on an immersive VR platform by applying spatial processing and visualizing its dimensions, and a control group, which built the identical object using a desktop computer-based, computer-aided design (CAD) program. Before and after the students participated in their respective "3D modeling" environment, their spatial reasoning abilities were assessed using the Middle Grades Mathematics Project Spatial Visualization Test (MGMP-SVT). Additionally, both groups created a physical 3D model as a secondary measure of the effectiveness of the IVRLE. The results of a one-way ANOVA in this study identified a negative effect on those in the IVRLE. These findings suggest that, with middle school students, virtual reality (VR) proved an inadequate tool for benefiting spatial relation skills as compared to desktop-based CAD. Keywords: virtual reality, spatial reasoning, CAD, middle school STEM
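The group comparison reported here rests on a one-way ANOVA, whose F statistic is the ratio of the between-group to the within-group mean squares. A self-contained Python sketch of that computation follows; any sample values used with it are invented, not the study's test scores.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA on a list of sample groups:
    between-group mean square over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the critical value for the two degrees of freedom) indicates that the pre/post gain differs between the VR and CAD groups.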
Procedia PDF Downloads 88
834 The Hidden Role of Interest Rate Risks in Carry Trades
Authors: Jingwen Shi, Qi Wu
Abstract:
We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. In this study, our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding. We find that large downside risks of future exchange rate movements are, in fact, priced significantly in the option market on interest rates. The role played by interest rate risk differs structurally from the currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for the tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially the tail components of the two. For this purpose, our major modeling work is to build a new international asset pricing model in which we use an orthogonal setup for pricing kernels and specify non-Gaussian dynamics in order to capture three sets of option skews accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open a door for studying the forward premium anomaly through implied information from the interest rate derivative market. Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity
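The payoff being decomposed here can be written in one line: the log return of a carry trade is the interest differential earned plus the log appreciation of the high-yield currency. Under uncovered interest rate parity, the expected depreciation exactly offsets the differential, so the puzzle is that realized returns are positive on average. A minimal Python illustration, with invented rates:

```python
def carry_trade_return(i_high, i_low, fx_log_change):
    """One-period log return of borrowing the low-yield currency and
    lending the high-yield one: the interest differential plus the log
    appreciation of the high-yield currency (negative if it depreciates)."""
    return (i_high - i_low) + fx_log_change
```

A large negative `fx_log_change` (a left-tail exchange rate move) wipes out the differential, which is exactly the tail risk the abstract finds priced in the swaption market.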
Procedia PDF Downloads 446
833 Exploring Hydrogen Embrittlement and Fatigue Crack Growth in API 5L X52 Steel Pipeline Under Cyclic Internal Pressure
Authors: Omar Bouledroua, Djamel Zelmati, Zahreddine Hafsi, Milos B. Djukic
Abstract:
Transporting hydrogen gas through the existing natural gas pipeline network offers an efficient solution for energy storage and conveyance. Hydrogen generated from excess renewable electricity can be conveyed through the API 5L steel-made pipelines that already exist. In recent years, there has been a growing demand for the transportation of hydrogen through existing gas pipelines. Therefore, numerical and experimental tests are required to verify and ensure the mechanical integrity of the API 5L steel pipelines that will be used for pressurized hydrogen transportation. Internal pressure loading is likely to accelerate hydrogen diffusion through the internal pipe wall and consequently accentuate the hydrogen embrittlement of steel pipelines. Furthermore, pre-cracked pipelines are susceptible to rapid failure, mainly under time-dependent cyclic pressure loading that drives fatigue crack propagation. After several loading cycles, the initial cracks will propagate to a critical size; at this point, the remaining service life of the pipeline can be estimated, and inspection intervals can be determined. This paper focuses on the hydrogen embrittlement of an API 5L steel-made pipeline under cyclic pressure loading. Pressurized hydrogen gas is transported through a network of pipelines where demands at consumption nodes vary periodically. The resulting pressure profile over time is considered as cyclic loading on the internal wall of a pre-cracked pipeline made of API 5L steel-grade material. Numerical modeling has allowed the prediction of fatigue crack evolution and estimation of the remaining service life of the pipeline. The methodology developed in this paper is based on the ASME B31.12 standard, which outlines the guidelines for hydrogen pipelines. Keywords: hydrogen embrittlement, pipelines, transient flow, cyclic pressure, fatigue crack growth
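Fatigue crack evolution of this kind is commonly driven by a Paris-type law, da/dN = C (ΔK)^m with ΔK = Y Δσ √(πa), integrated cycle by cycle from the initial flaw to the critical size. The Python sketch below performs that integration numerically; the constants C, m, and Y and all inputs are illustrative placeholders, not the calibrated values of the ASME B31.12-based methodology in the paper (hydrogen typically raises C and so shortens the predicted life).

```python
import math

def cycles_to_critical(a0, a_crit, delta_sigma,
                       C=1e-11, m=3.0, Y=1.0, steps=10000):
    """Integrate the Paris law da/dN = C * (dK)**m, with
    dK = Y * delta_sigma * sqrt(pi * a), from crack depth a0 to a_crit.
    Stresses in MPa, lengths in m; C and m are illustrative placeholders."""
    n_cycles = 0.0
    da = (a_crit - a0) / steps
    a = a0
    for _ in range(steps):
        dk = Y * delta_sigma * math.sqrt(math.pi * (a + da / 2.0))
        n_cycles += da / (C * dk ** m)  # dN = da / (da/dN)
        a += da
    return n_cycles
```

The returned cycle count, divided by the number of pressure cycles per year from the demand profile, gives a remaining-life estimate and hence an inspection interval.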
Procedia PDF Downloads 93
832 Evaluation of Effectiveness of Three Common Equine Thrush Treatments
Authors: A. S. Strait, J. A. Bryk-Lucy, L. M. Ritchie
Abstract:
Thrush is a common disease of ungulates, primarily affecting the frog and sulci, caused by the anaerobic bacterium Fusobacterium necrophorum. Thrush accounts for approximately 45.0% of hoof disorders in horses. Prevention and treatment of thrush are essential to prevent horses from developing severe infections and becoming lame. Proper knowledge of hoof care and thrush treatments is crucial to avoid financial costs, unsoundness, and lost training time. Research on the effectiveness of the numerous commercial and homemade thrush treatments is limited in the equine industry. The objective of this study was to compare the effectiveness of three common thrush treatments for horses: weekly application of Thrush Buster, daily dilute bleach solution spray, or Metronidazole paste every other day. Cases of thrush diagnosed by a veterinarian or veterinarian-trained researcher were given a score, from 0 to 4, based on the severity of the thrush in each hoof (n=59) and randomly assigned a treatment. Cases were rescored each week of the three-week treatment, and the final and initial scores were compared to determine effectiveness. The thrush treatments were compared with Thrush Buster as the reference at a significance level of α=.05. Binomial logistic regression modeling found that, after adjustment for treatment week, the odds of a hoof treated with Metronidazole being thrush-free were 6.1 times those of a hoof treated with Thrush Buster (p=0.001), while the odds of a hoof treated with bleach being thrush-free were only 0.97 times those of a hoof treated with Thrush Buster (p=0.970). Of the three treatments utilized in this study, Metronidazole paste applied to the affected areas every other day was the most effective treatment for thrush in horses. There are many other thrush remedies available, and further research is warranted to determine the efficacy of additional treatment options. Keywords: Fusobacterium necrophorum, thrush, equine, horse, lameness
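The odds ratios reported above come straight from exponentiated logistic regression coefficients: an OR of 6.1 corresponds to a coefficient of ln(6.1) ≈ 1.81. The Python sketch below shows that relationship and the implied probabilities; the intercept used in the test is invented for illustration, not fitted from the study's data.

```python
import math

def odds_ratio(beta):
    """Odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta)

def prob_from_logit(intercept, beta, x):
    """Predicted probability (here, of being thrush-free) from a logit
    model with a single indicator x (1 = treatment, 0 = reference)."""
    z = intercept + beta * x
    return 1.0 / (1.0 + math.exp(-z))
```

Note that an OR of 0.97, as for the bleach spray, means the odds are slightly *lower* than the reference, not "0.97 times greater."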
Procedia PDF Downloads 161
831 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes
Authors: V. Churkin, M. Lopatin
Abstract:
The purpose of the paper is to estimate the US small wind turbines market potential and forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work, an exponential distribution is used for modeling replacement purchases; the single parameter of this distribution is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57,080 (confidence interval from 49,294 to 64,866 at P = 0.95) under an average wind turbine lifetime of 15 years, and 62,402 (confidence interval from 54,154 to 70,648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast, for which a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42,542 (confidence interval from 32,863 to 52,221 at P = 0.95) under an average lifetime of 15 years, and 47,426 (confidence interval from 36,092 to 58,760 at P = 0.95) under an average lifetime of 20 years. In both cases the explained variance is 95.3%. Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States
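The closed-form Bass model underlying the forecast gives cumulative adoptions N(t) = m(1 − e^{−(p+q)t}) / (1 + (q/p)e^{−(p+q)t}), with market potential m, innovation coefficient p, and imitation coefficient q; per-period sales are increments of N(t). The Python sketch below uses the paper's 15-year market-potential estimate, but the p and q values in the test are invented for illustration, since the abstract does not report the fitted coefficients.

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) under the Bass diffusion model:
    m * (1 - exp(-(p+q)t)) / (1 + (q/p) * exp(-(p+q)t))."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def bass_sales(t, m, p, q, dt=1.0):
    """Per-period (adoption) sales as the increment of N(t)."""
    return bass_cumulative(t, m, p, q) - bass_cumulative(t - dt, m, p, q)
```

To include replacement purchases, the abstract's exponential-lifetime assumption adds to each year's sales the fraction of past adoptions expected to have worn out.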
Procedia PDF Downloads 348
830 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, or can be applied to performance evaluation of an existing hotel. The findings unveil the features that play key roles in a hotel's revenue performance, which has considerable potential usefulness in both revenue prediction and evaluation. Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
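Among the benchmarked methods, K-Nearest-Neighbors is the simplest to sketch: the demand forecast for a query point (say, a price and lead-time combination) is the average outcome of the k most similar historical observations. A minimal pure-Python version follows, with invented toy data standing in for the study's proprietary sample.

```python
def knn_forecast(train_x, train_y, query, k=3):
    """Predict the target for a query feature vector as the mean target of
    its k nearest training points under squared Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

# invented toy data: room price -> rooms sold
prices = [(100.0,), (110.0,), (200.0,), (210.0,)]
demand = [80.0, 78.0, 40.0, 38.0]
```

With market-level features (competitor prices, review scores, chain scale), the same idea extends directly: each feature simply becomes another coordinate of the vectors.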
Procedia PDF Downloads 135
829 Spatiotemporal Modeling of Under-Five Mortality and Associated Risk Factors in Ethiopia
Authors: Melkamu A. Zeru, Aweke A. Mitiku, Endashaw Amuka
Abstract:
Background: Under-five mortality is the probability that a child will die before reaching exactly five years of age, expressed as a rate per 1,000 live births. Exploring the spatial distribution and identifying the temporal pattern are important for reducing under-five child mortality globally, including in Ethiopia. Thus, this study aimed to identify the risk factors of under-five mortality and the spatiotemporal variation across Ethiopian administrative zones. Method: This study used the 2000-2016 Ethiopian Demographic and Health Survey (EDHS) data, which were collected using a two-stage sampling method. A total of 43,029 weighted sample observations of under-five child mortality (10,873 in 2000; 9,861 in 2005; 11,654 in 2011; and 10,641 in 2016) were used. A space-time dynamic model was employed to account for spatial and temporal effects in the 65 administrative zones of Ethiopia. Results: The general nesting spatial-temporal dynamic model showed a significant space-time interaction effect [γ = -0.1444, 95% CI (-0.6680, -0.1355)] for under-five mortality. Increases in the percentages of mothers' illiteracy [β = 0.4501, 95% CI (0.2442, 0.6559)], non-vaccination [β = 0.7681, 95% CI (0.5683, 0.9678)], and unimproved water [β = 0.5801, 95% CI (0.3793, 0.7808)] increased death rates for under-five children, while increased percentages of contraceptive use [β = -0.6609, 95% CI (-0.8636, -0.4582)] and more than four ANC visits [β = -0.1585, 95% CI (-0.1812, -0.1357)] contributed to a decreased under-five mortality rate at the zone level in Ethiopia. Conclusions: Even though the mortality rate for children under five has decreased over time, it remains high in several zones of Ethiopia, and there exists spatial and temporal variation in under-five mortality among zones. Therefore, it is very important to consider spatial neighbourhoods and temporal context when aiming to avoid under-five mortality. Keywords: under-five children mortality, space-time dynamic, spatiotemporal, Ethiopia
Procedia PDF Downloads 40
828 Cognitive Science Based Scheduling in Grid Environment
Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya
Abstract:
Grid is an infrastructure that allows the deployment of large volumes of distributed data from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist to tackle this challenging issue. First, the computation which requires huge data sets to be processed can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability. Hence, the second scenario can be considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities. Current research is mainly focused on incorporating cognitive science into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing the request and creating the knowledge base. Depending upon the link capacity, a decision is taken whether to transfer the data sets or to partition them. Prediction of the next request is made by the agents to serve the requesting site with data sets in advance, which reduces the data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process. Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence
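The transfer-or-partition decision described above can be caricatured as a bandwidth-versus-deadline check: if the estimated transfer time of the whole data set fits comfortably within the request's deadline, ship it whole; otherwise split it across replicas. The Python sketch below is a hypothetical illustration of such a rule; the function, the 8-bits-per-byte conversion, and the threshold are assumptions, not the agents' actual decision logic.

```python
def schedule_data_request(dataset_size_gb, link_bandwidth_gbps, deadline_s,
                          partition_threshold=0.8):
    """Decide whether to transfer a data set whole or partition it,
    based on estimated transfer time vs. the request deadline.
    The 0.8 safety margin is an illustrative assumption."""
    # size in gigabytes -> gigabits, divided by link rate in Gbit/s
    transfer_time_s = dataset_size_gb * 8.0 / link_bandwidth_gbps
    if transfer_time_s <= partition_threshold * deadline_s:
        return "transfer"
    return "partition"
```

In the full system, the CE's agents would refine this with predicted future requests so that data can be staged at the requesting site in advance.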
Procedia PDF Downloads 395
827 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education
Authors: Gülay Dalgıç, Gildis Tachir
Abstract:
Architectural design studios are the ambience in which architectural design is realized as a palpable product in architectural education. In the design studios, the information that the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. In the studio ambience, many design and thought relationships are evaluated; the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data for the defined early design phase criteria were obtained from feedback graphics created for the architect candidates who performed e-learning in the first year of architectural education and then continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program. It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education. Keywords: education modeling, architecture education, design education, design process
Procedia PDF Downloads 139
826 Human Resource Information System: Role in HRM Practices and Organizational Performance
Authors: Ejaz Ali M. Phil
Abstract:
Enterprise Resource Planning (ERP) systems are playing a vital role in the effective management of business functions in large and complex organizations. The Human Resource Information System (HRIS) is a core module of ERP, providing concrete solutions to implement Human Resource Management (HRM) practices in an innovative and efficient manner. Over the last decade, there has been a considerable increase in studies on HRIS. Nevertheless, previous studies have largely failed to examine the moderating role of HRIS in performing HRM practices that may affect firms' performance. The current study was carried out to examine the impact of HRM practices (training, performance appraisal) on perceived organizational performance, with the moderating role of HRIS where the system is in place. The study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories, which advocate that strengthening human capital enables an organization to achieve and sustain competitive advantage, leading to improved organizational performance. Data were collected through a structured questionnaire based upon adopted instruments after establishing reliability and validity. Structural equation modeling (SEM) was used to assess model fitness and test hypotheses, and the validity of the instruments was established through Confirmatory Factor Analysis (CFA). A total of 220 employees of 25 firms in the corporate sector were sampled through a non-probability sampling technique. Path analysis revealed that HRM practices and HRIS have a significant positive impact on organizational performance. The results further showed that HRIS moderated the relationships between training, performance appraisal, and organizational performance. The interpretation of the findings and limitations, and the theoretical and managerial implications, are discussed. Keywords: enterprise resource planning, human resource, information system, human capital
Procedia PDF Downloads 397
825 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which entails great expense and time. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately, and is used at construction sites and in cultural heritage management. However, there are many limitations in processing a TLS point cloud, because the raw point cloud is a massive volume of data, so the capability of carrying out useful analyses on the unstructured 3D points is also limited. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud which includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective. Fuzzy C-Means (FCM) is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to the point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a bridge which connects the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. It is about 32 m long and 2 m wide and is used as a pedestrian walkway between the two buildings. The 3D point cloud of the test bed was constructed from a TLS measurement, and this data was divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising; the segmented point cloud can then be used to manage the configuration of each member. Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
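Fuzzy C-Means alternates two updates until convergence: centroids as membership-weighted means (weights u^m) and memberships from relative distances to the centroids. The pure-Python sketch below (requires Python 3.8+ for `math.dist`) is a generic FCM on 3D points, without the similarity-driven cluster merging step the paper adds; the sample points in the demo are invented, not scanner data.

```python
import math
import random

def fcm(points, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-Means on a list of equal-dimension point tuples.
    Returns (centroids, membership matrix u with rows summing to 1)."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # random initial memberships, each row normalized to sum to 1
    u = [[rng.random() for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]
    for _ in range(iters):
        # centroid update: membership-weighted mean with weights u**m
        cents = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w)
            cents.append(tuple(
                sum(w[i] * points[i][d] for i in range(n)) / tot
                for d in range(dim)))
        # membership update from relative distances to the centroids
        for i in range(n):
            dist = [max(math.dist(points[i], cj), 1e-12) for cj in cents]
            for j in range(c):
                u[i][j] = 1.0 / sum(
                    (dist[j] / dist[k]) ** (2.0 / (m - 1.0))
                    for k in range(c))
    return cents, u

# invented demo: two well-separated groups of 3D points
sample = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0),
          (10.0, 10.0, 10.0), (10.1, 10.0, 10.0)]
centroids, memberships = fcm(sample)
```

On real scans, over-segmentation from a too-large c is what the paper's similarity-driven merging step then repairs by fusing clusters with similar attributes.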
Procedia PDF Downloads 238
824 The Antecedents of Internet Addiction toward Smartphone Usage
Authors: Pui-Lai To, Chechen Liao, Hen-Yi Huang
Abstract:
Twenty years after the development of the Internet, scholars have started to identify its negative impacts. Overuse of the Internet can create Internet dependency and in turn cause addictive behavior. Therefore, understanding the phenomenon of Internet addiction is important. With the joint efforts of experts and scholars, Internet addiction has been officially listed as a symptom that affects public health, and the diagnosis, causes and treatment of the symptom have also been explored. On the other hand, in the area of smartphone Internet usage, most studies still focus on the motivation factors of smartphone usage; not much research has been done on smartphone Internet addiction. In view of the increasing adoption of smartphones, this paper is intended to find out whether smartphone Internet addiction exists in modern society. This study adopted an online survey methodology targeting users with smartphone Internet experience, and a total of 434 effective samples were recovered. For data analysis, Partial Least Squares (PLS) in Structural Equation Modeling (SEM) is used for sample analysis and research model testing; the software chosen for statistical analysis is SPSS 20.0 for Windows and SmartPLS 2.0. The results successfully show that smartphone users who access Internet services via smartphone can also develop smartphone Internet addiction. Factors including flow experience, depression, virtual social support, smartphone Internet affinity and maladaptive cognition all have a significant and positive influence on smartphone Internet addiction. In the scenario of smartphone Internet use, descriptive norm has a positive and significant influence on perceived playfulness, while perceived playfulness in turn has a significant and positive influence on flow experience. Depression, on the other hand, is negatively influenced by actual social support and positively influenced by virtual social support.Keywords: internet addiction, smartphone usage, social support, perceived playfulness
Procedia PDF Downloads 247
823 Comparative Study of Flood Plain Protection Zone Determination Methodologies in Colombia, Spain and Canada
Authors: P. Chang, C. Lopez, C. Burbano
Abstract:
Flood protection zones are riparian buffers that are established to manage and mitigate the impact of flooding and, in turn, protect local populations. The purpose of this study was to evaluate the Guía Técnica de Criterios para el Acotamiento de las Rondas Hídricas in Colombia against international regulations in Canada and Spain, in order to determine its limitations and contribute to its improvement. The need to establish a specific corridor that allows for the dynamic development of a river is clear; however, limitations present in the Colombian Technical Guide are identified. The study shows that the international regulations employ concepts similar to those used in Colombia, but additionally integrate aspects such as regionalization, which allows for a better characterization of the channel way, and incorporate the frequency of flooding and its probability of occurrence into the concept of risk when determining the protection zone. The case study analyzed in Dosquebradas - Risaralda compared the application of the different standards through hydraulic modeling. It highlights that the current Colombian standard does not offer sufficient detail in its implementation phase, which leads to a false sense of security arising from inaccuracy and lack of data. Furthermore, the study demonstrates how the Colombian norm is ill-adapted to the conditions of Dosquebradas, typical of the Andes region, in both social and hydraulic aspects, and neither reduces the risk nor improves the protection of the population. We consider it pertinent to include risk estimation as an integral part of the methodology when establishing a flood protection zone, given the particularity of water systems, which are characterized by a heterogeneous natural dynamic behavior.Keywords: environmental corridor, flood zone determination, hydraulic domain, legislation flood protection zone
Procedia PDF Downloads 114
822 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures
Authors: Michał Lidner, Zbigniew Szcześniak
Abstract:
The ability to estimate blast load overpressure properly plays an important role in the safe design of buildings. The issue of blast loading on structural elements has been explored for many years. However, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time, which introduces errors when comparing the real and numerical reactions of elements. Nonetheless, it is possible to approximate the real blast load overpressure as a function of time more closely. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer with respect to an adiabatic process rule. A system of three equations (conservation of mass, momentum and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments, which can inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free field explosion, an explosion inside a rigid (non-deformable) steel tube (the 1D case) and an explosion inside a rigid cube (the 3D case). The results of the numerical analysis were compared with literature reports. Values of impulse, pressure, and its duration were studied. Overall, a good convergence of numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, the dynamic response of one of the considered structural elements was analyzed.Keywords: adiabatic process, air shock wave, explosive, finite volume method
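The exponential time distribution mentioned above is often written as the modified Friedlander waveform; a small sketch follows, with peak overpressure and positive-phase duration chosen purely for illustration (they are not values from this study).

```python
import math

def friedlander(t, p_max, t_d, b=1.0):
    """Modified Friedlander overpressure at time t (same units as inputs).
    p_max: peak overpressure, t_d: positive-phase duration, b: decay coefficient."""
    if t < 0 or t > t_d:
        return 0.0
    return p_max * (1.0 - t / t_d) * math.exp(-b * t / t_d)

# assumed example values: 100 kPa peak, 10 ms positive phase
p0, td = 100.0, 0.010
samples = [friedlander(k * td / 10, p0, td) for k in range(11)]
```

The waveform starts at the peak and decays monotonically to zero at the end of the positive phase, which is the behaviour the simplified triangular model approximates.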
Procedia PDF Downloads 194
821 Numerical Modeling of Film Cooling of the Surface at Non-Uniform Heat Flux Distributions on the Wall
Authors: M. V. Bartashevich
Abstract:
The problem of heat transfer in a thin laminar liquid film is solved numerically. A thin film of liquid flows down an inclined surface under conditions of variable heat flux on the wall. The use of thin liquid films allows the creation of effective technologies for cooling surfaces. However, it is important to investigate the most suitable cooling regimes from a safety point of view, in order, for example, to avoid overheating caused by ruptures of the liquid film, and also to study the most effective cooling regimes depending on the character of the heat flux distribution on the wall, as well as the character of the blowing of the film surface, i.e., the external shear stress on its surface. In the statement of the problem, the heat transfer coefficient between the liquid and gas is set on the film surface, as well as a variable external shear stress - the intensity of blowing. It is shown that the combination of these factors - the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing - affects the efficiency of heat transfer. With an increase in the intensity of blowing, the cooling efficiency increases, reaching a maximum, and then decreases. It is also shown that the more uniform the heating of the wall, the more efficient the heat sink. A separate study was made of the flow regime along a horizontal surface, when the liquid film moves solely due to the external stress. For this regime, an analytical solution for the temperature in the entrance region is used for further numerical calculations downstream, and the influence of the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing of the film surface on the heat transfer efficiency was likewise studied. This work was carried out at the Kutateladze Institute of Thermophysics SB RAS (Russia) and supported by FASO Russia.Keywords: heat flux, heat transfer enhancement, external blowing, thin liquid film
Procedia PDF Downloads 151
820 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery
Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour
Abstract:
In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is developed based on patient-specific dimensions obtained from experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder-length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on a detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to model the shear-thinning behaviour of the blood flow. According to the results, as the geometrical characteristics are varied, the structural responses in terms of the maximum principal stress (MPS) are affected more than the fluid responses in terms of wall shear stress (WSS). The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.Keywords: atherosclerosis, fluid-structure interaction modeling, material stability analysis, nonlinear biomechanics
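The shear-thinning behaviour referred to above is commonly captured with a Carreau-type viscosity law; a sketch follows using frequently cited literature parameters for blood (assumed here for illustration, not necessarily the values used in this study).

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Carreau apparent viscosity (Pa*s) of blood versus shear rate (1/s).
    mu0/mu_inf: zero- and infinite-shear viscosities; lam: relaxation time;
    n: power-law index. Values are commonly cited literature values,
    assumed here for illustration."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

low = carreau_viscosity(0.01)    # near-zero shear: close to mu0
high = carreau_viscosity(1e4)    # high shear: approaches mu_inf
```

Between these limits the apparent viscosity falls smoothly with shear rate, which is what makes the model suitable for pulsatile coronary flow where wall shear varies over the cardiac cycle.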
Procedia PDF Downloads 90
819 Antecedents of Regret and Satisfaction in Electronic Commerce
Authors: Chechen Liao, Pui-Lai To, Chuang-Chun Liu
Abstract:
Online shopping has become very popular recently. In today’s highly competitive online retail environment, retaining existing customers is a necessity for online retailers. This study focuses on the antecedents and consequences of Internet buyer regret and satisfaction in the online consumer purchasing process. It examines the roles that online consumers’ purchasing process evaluations (i.e., search experience difficulty, service-attribute evaluations, product-attribute evaluations and post-purchase price perceptions) and alternative evaluation (i.e., alternative attractiveness) play in determining buyer regret and satisfaction in e-commerce. The study also examines the consequences of regret, satisfaction and habit with regard to repurchase intention. In addition, it investigates the moderating role of habit to attain a better understanding of the relationship between repurchase intention and its antecedents. Survey data collected from 431 online customers are analyzed using structural equation modeling (SEM) with partial least squares (PLS), and support is provided for the hypothesized links. The results indicate that online consumers’ purchasing process evaluations (i.e., search experience difficulty, service-attribute evaluations, product-attribute evaluations and post-purchase price perceptions) have significant influences on regret and satisfaction, which in turn influence repurchase intention. In addition, alternative evaluation (i.e., alternative attractiveness) has a significant positive influence on regret. The research model can provide a richer understanding of online customers’ repurchase behavior and contributes to both research and practice.Keywords: online shopping, purchase evaluation, regret, satisfaction
Procedia PDF Downloads 284
818 An Analysis of Different Essential Components of Flight Plan Operations at Low Altitude
Authors: Apisit Nawapanpong, Natthapat Boonjerm
Abstract:
This project aims to analyze and identify the essential components of flight plans for low-altitude aviation in Thailand and other countries. The development of UAV technology has driven innovation and revolution in the aviation industry, including new modes of passenger and freight transportation, and it has also affected other industries widely. At present, this technology is being developed rapidly and tested all over the world to make it as efficient as possible, and it is likely to grow more extensively. However, no flight plan for low-altitude operation has been published by government organizations; compared with high-altitude aviation with manned aircraft, various unique factors differ, whether mission, operation, altitude range or airspace restrictions. For the essential components of low-altitude operation measures to be practical and tangible, major problems had to be addressed, so the main consideration of this project is to analyze the components of low-altitude operations conducted up to an altitude of 400 ft (120 meters) above ground level with reference to the terrain, for example, air traffic management, classification of aircraft, basic necessity and safety, and control areas. This research confirms the theory through qualitative and quantitative research combined with theoretical modeling and a regulatory framework, and by gaining insights from various positions in the aviation industry, including aviation experts, government officials, air traffic controllers, pilots, and airline operators, to identify the critical essential components of low-altitude flight operation. The project uses scientific and statistical computer programs to verify that the results are consistent with the theory; the different essential components identified can be of benefit in regulating flight plans for low-altitude operation and can be further developed in future studies and research in the aviation industry.Keywords: low-altitude aviation, UAV technology, flight plan, air traffic management, safety measures
Procedia PDF Downloads 77
817 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, which means coupling together the 2-D Navier-Stokes equations, constitutive equations and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part or even the whole cardiovascular system at acceptable computational cost. In this study we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg number. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics
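A 0-D lumped parameter model of the kind coupled here can be illustrated with the classic two-element Windkessel, C dp/dt = Q(t) - p/R; the sketch below uses forward Euler with an assumed half-sine inflow and assumed parameter values (illustrative only, not this study's model).

```python
import math

def windkessel_2(R=1.0, C=1.0, T=0.8, t_sys=0.3, q_max=500.0,
                 p0=80.0, dt=1e-4, n_beats=5):
    """Two-element Windkessel: C * dp/dt = Q(t) - p / R.
    Half-sine inflow during systole, zero inflow in diastole (assumed
    waveform). Units are illustrative (mmHg, s, mL/s)."""
    p, t = p0, 0.0
    history = []
    steps = int(round(n_beats * T / dt))
    for _ in range(steps):
        phase = t % T
        q = q_max * math.sin(math.pi * phase / t_sys) if phase < t_sys else 0.0
        p += dt * (q - p / R) / C                # forward-Euler update
        history.append(p)
        t += dt
    return history

pressures = windkessel_2()
```

In a coupled simulation, the pressure produced by such a 0-D model would supply the outflow boundary condition for the 2-D flow domain at each time step.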
Procedia PDF Downloads 363
816 Rational Approach to the Design of a Sustainable Drainage System for Permanent Site of Federal Polytechnic Oko: A Case Study for Flood Mitigation and Environmental Management
Authors: Fortune Chibuike Onyia, Femi Ogundeji Ayodele
Abstract:
The design of a drainage system at the permanent site of Federal Polytechnic Oko in Anambra State is critical for mitigating flooding, managing surface runoff, and ensuring environmental sustainability. The design process employed a comprehensive analysis involving topographical surveys, hydraulic modeling, and assessment of local soil types to ensure stability and efficient water conveyance. Proper slope gradients were considered to maintain adequate flow velocities and avoid sediment deposition, which could hinder long-term performance. From the results, the channel size estimated was 0.199 m by 0.0199 m and 0.0199 m². This study proposed a channel size of 1.4 m depth by 0.5 m width and 0.7 m², optimized to accommodate the anticipated peak flow resulting from heavy rainfall and storm-water events. This sizing is based on hydrological data, which take into account rainfall intensity, runoff coefficients, and catchment area characteristics. The objective is to effectively convey storm-water while preventing overflow, erosion, and subsequent damage to infrastructure and properties. This sustainable approach incorporates provisions for maintenance and aligns with urban drainage standards to enhance durability and reliability. Implementing this drainage system will mitigate flood risks, safeguard campus facilities, improve overall water management, and contribute to the development of resilient infrastructure at Federal Polytechnic Oko.Keywords: flood mitigation, drainage system, sustainable design, environmental management
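Peak-flow sizing of this kind is commonly done by pairing the rational method with a Manning capacity check; a sketch follows with hypothetical catchment and roughness values (only the 1.4 m by 0.5 m section comes from the abstract, everything else is assumed for illustration).

```python
def rational_peak_flow(c, i_mm_per_hr, a_ha):
    """Rational method: Q (m^3/s) = C * i * A / 360, with i in mm/h and A in ha."""
    return c * i_mm_per_hr * a_ha / 360.0

def manning_capacity(b, y, n, s):
    """Full-flow capacity (m^3/s) of a rectangular channel of width b (m),
    depth y (m), Manning roughness n, and bed slope s (m/m)."""
    area = b * y
    perimeter = b + 2.0 * y
    r = area / perimeter                         # hydraulic radius
    return (1.0 / n) * area * r ** (2.0 / 3.0) * s ** 0.5

# hypothetical illustrative values for a largely paved campus catchment
q_peak = rational_peak_flow(c=0.8, i_mm_per_hr=120.0, a_ha=5.0)
# the proposed 0.5 m wide x 1.4 m deep concrete section (n, slope assumed)
q_cap = manning_capacity(b=0.5, y=1.4, n=0.013, s=0.005)
```

The design check is simply that the channel capacity exceeds the rational-method peak flow with some freeboard.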
Procedia PDF Downloads 13
815 Patient-Specific Design Optimization of Cardiovascular Grafts
Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw
Abstract:
Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match patients’ growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is performed on models constructed from each individual patient’s data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of graft evaluation. We are also developing a methodology for the fabrication of the optimized designs.Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering
Procedia PDF Downloads 246
814 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of the acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach’s alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be flat, tectonically little affected and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also proved the bias and erraticism of the acquired database, and the low estimated value of alpha (α) in Cronbach’s alpha test confirmed its poor reliability. A very low-quality acquired database needs excessive static correction or, in some cases, reacquisition of data is suggested, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further be utilized as a guideline to establish database quality assessment models to make much more informed decisions in the hydrocarbon exploration field.Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey
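The NRMSE and Cronbach's alpha statistics named above can be computed as in this minimal sketch on synthetic data (not the survey database); the range-normalization of the RMSE is one common convention among several.

```python
import statistics

def nrmse(observed, predicted):
    """Root mean square error normalized by the observed range."""
    sq = [(o - p) ** 2 for o, p in zip(observed, predicted)]
    rmse = (sum(sq) / len(sq)) ** 0.5
    return rmse / (max(observed) - min(observed))

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1.0 - item_var / statistics.variance(totals))

obs = [2.0, 4.0, 6.0, 8.0]
pred = [2.5, 3.5, 6.5, 7.5]
e = nrmse(obs, pred)                 # rmse 0.5 over a range of 6

# three identical items -> perfectly consistent scale, alpha = 1
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
a = cronbach_alpha(items)
```

A low alpha on real data, as reported in the abstract, indicates that the repeated measurements do not agree with each other, i.e. poor reliability of the database.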
Procedia PDF Downloads 166
813 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis
Authors: Maher Ali Rusho, Sudipta Halder
Abstract:
The purpose of the study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for the examined models of parallel computing systems. The result of the parallel sorting code is the output of a sorted array of random numbers. When processing data in parallel, the time spent on processing and the first elements of the list of squared numbers are displayed. When processing requests asynchronously, processing completion messages are displayed for each task with a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as the construction of diagrams and comparison of these methods by characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results of the work performed showed a substantial improvement in response time, bandwidth, and resource efficiency in parallel computing systems. Scalability and load analysis assessments were conducted, demonstrating how the system responds to an increase in data volume or the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in models, which improved the architecture and implementation of parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming
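The study's programmes are written in Java; a rough Python analogue of the described pattern (dividing the data into subtasks, processing them in a worker pool, and timing the parallel squaring step) might look like the following. Chunk size and worker count are arbitrary illustrative choices.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    """Subtask: square every number in one slice of the data."""
    return [x * x for x in chunk]

data = list(range(1000))
# divide the task into fixed-size subtasks, as described in the abstract
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square_chunk, chunks))   # order is preserved
elapsed = time.perf_counter() - start

squared = [x for chunk in results for x in chunk]
print(f"processed in {elapsed:.4f}s, first elements: {squared[:5]}")
```

This mirrors the abstract's description of displaying the processing time and the first elements of the list of squared numbers; load balancing here is implicit in the equal chunk sizes.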
Procedia PDF Downloads 17
812 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variance of these properties under site conditions. Therefore, the specific heat capacity and the heat conductivity coefficient are two variables that are treated as constant values in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature does not change as a result of a constant conductivity coefficient, but variable specific heat capacity must be taken into account; variable specific heat capacity can also have a considerable effect on the time at which a concrete's central node reaches its maximum value. Furthermore, the use of GGBFS has more influence than fly ash.Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
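The linear dependence of the two thermal properties on the degree of hydration described above can be sketched as follows; the endpoint coefficients are hypothetical placeholders, not the study's fitted values.

```python
def specific_heat(alpha, c_initial=1100.0, c_final=900.0):
    """Specific heat capacity (J/(kg*K)) decreasing linearly with the
    degree of hydration alpha in [0, 1]. Endpoint values are assumed."""
    return c_initial + (c_final - c_initial) * alpha

def conductivity(alpha, k_initial=2.5, k_final=2.0):
    """Thermal conductivity (W/(m*K)), same linear form; values assumed."""
    return k_initial + (k_final - k_initial) * alpha

# the temperature rise per unit heat release scales with 1/c:
# dT = dQ / (rho * c(alpha)), so a lower late-age c amplifies late heating
rho, dQ = 2400.0, 1.0e5            # kg/m^3 and J/m^3, illustrative only
dT_early = dQ / (rho * specific_heat(0.1))
dT_late = dQ / (rho * specific_heat(0.9))
```

This is why, as the abstract notes, treating the specific heat as constant distorts the predicted time history even when the peak temperature itself is insensitive to the conductivity assumption.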
Procedia PDF Downloads 81
811 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches
Authors: Kakali Bhadra
Abstract:
Binding of small molecules to DNA and recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for control of gene expression. This work focuses towards understanding interaction of harmalol, a dihydro beta-carboline alkaloid, with different nucleic acid motifs viz. double stranded CT DNA, single stranded A-form poly(A), double-stranded A-form of poly(C)·poly(G) and clover leaf tRNAphe by different spectroscopic, calorimetric and molecular modeling techniques. Results of this study converge to suggest that (i) binding constant varied in the order of CT DNA > poly(C)·poly(G) > tRNAphe > poly(A), (ii) non-cooperative binding of harmalol to poly(C)·poly(G) and poly(A) and cooperative binding with CT DNA and tRNAphe, (iii) significant structural changes of CT DNA, poly(C)·poly(G) and tRNAphe with concomitant induction of optical activity in the bound achiral alkaloid molecules, while with poly(A) no intrinsic CD perturbation was observed, (iv) the binding was predominantly exothermic, enthalpy driven, entropy favoured with CT DNA and poly(C)·poly(G) while it was entropy driven with tRNAphe and poly(A), (v) a hydrophobic contribution and comparatively large role of non-polyelectrolytic forces to Gibbs energy changes with CT DNA, poly(C)·poly(G) and tRNAphe, and (vi) intercalated state of harmalol with CT DNA and poly(C)·poly(G) structure as revealed from molecular docking and supported by the viscometric data. Furthermore, with competition dialysis assay it was shown that harmalol prefers hetero GC sequences. All these findings unequivocally pointed out that harmalol prefers binding with ds CT DNA followed by ds poly(C)·poly(G), clover leaf tRNAphe and least with ss poly(A). 
The results highlight the importance of structural elements in these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs, with a view to developing better nucleic acid-based therapeutic agents.Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy
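The thermodynamic decomposition behind these calorimetric conclusions, ΔG = -RT ln K = ΔH - TΔS, can be sketched with hypothetical values (the binding constant and enthalpy below are illustrative, not the measured harmalol data).

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def gibbs_from_k(k_binding, temperature=298.15):
    """Binding free energy (J/mol) from the association constant:
    dG = -R * T * ln(K)."""
    return -R * temperature * math.log(k_binding)

def entropy_term(delta_g, delta_h):
    """T * dS (J/mol) from dG = dH - T*dS."""
    return delta_h - delta_g

# hypothetical example: K = 1e5 M^-1 and an exothermic dH of -30 kJ/mol
dG = gibbs_from_k(1e5)
TdS = entropy_term(dG, -30e3)
```

Here the small negative TΔS with a large negative ΔH corresponds to the "exothermic, enthalpy driven" binding the abstract reports for CT DNA, whereas a positive TΔS dominating ΔG would indicate the entropy-driven binding seen with tRNAphe and poly(A).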
Procedia PDF Downloads 228