Search results for: structural equation modeling (SEM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8995

715 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. Customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to get detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. Three primary problems arose during the modeling process: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are applied to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that combines all three methods is superior for evaluating the influence of other nearby supermarkets on the purchasing behavior of a supermarket chain's customers, from the viewpoint of valid partial regression coefficients and accuracy.
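
Huff's gravity model, used above to make customer behavior comparable across store neighbourhoods, assigns each customer a probability of patronising each store from store attractiveness and distance. A minimal sketch of that step, with hypothetical floor areas, distances, and decay exponent (the abstract does not report the actual parameters):

```python
import numpy as np

def huff_probabilities(attractiveness, distances, decay=2.0):
    """Huff's gravity model: P_ij = (A_j / d_ij^decay) / sum_k (A_k / d_ik^decay).

    attractiveness: (n_stores,) store attractiveness, e.g. floor area
    distances:      (n_customers, n_stores) customer-to-store distances
    Returns (n_customers, n_stores) patronage probabilities.
    """
    utility = attractiveness / distances ** decay
    return utility / utility.sum(axis=1, keepdims=True)

# Hypothetical example: 3 customers, focal store plus 2 competitors
area = np.array([2000.0, 1500.0, 3000.0])          # m^2, assumed
dist = np.array([[0.5, 1.2, 2.0],
                 [1.5, 0.4, 1.0],
                 [2.2, 1.8, 0.6]])                  # km, assumed
print(huff_probabilities(area, dist).round(3))
```

These patronage probabilities (or quantities derived from them) can then enter the logistic regression as covariates alongside the improved customer value.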

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 123
714 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, which would totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny. But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
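
The "tiny drag" estimate can be reproduced directly from the numbers quoted above, using the photon gas pressure P = αT⁴/3 with the radiation constant α = 4σ/c and treating the Earth as a disc of its geometric cross section (a deliberate simplification):

```python
import math

sigma = 5.670374419e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
a_rad = 4 * sigma / 2.99792458e8       # radiation constant, J m^-3 K^-4

T, dT = 2.725, 0.35e-3                 # CMB temperature and dipole amplitude, K
P_hot = a_rad * (T + dT) ** 4 / 3      # photon gas pressure, P = a T^4 / 3
P_cold = a_rad * (T - dT) ** 4 / 3

R_earth = 6.371e6                      # m
area = math.pi * R_earth ** 2          # geometric cross section of the Earth

force = (P_hot - P_cold) * area
print(f"pressure difference: {P_hot - P_cold:.2e} Pa")
print(f"drag on Earth:       {force:.2e} N")   # about 2e-3 N, indeed tiny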

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 133
713 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. To this end, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted using different clustering methods to classify the most related and effective input variables, yet the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city's wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target water quality parameter. Model A used principal component analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. The optimal ANN structure of model B showed up to a 15% increment in the determination coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
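
A sketch of the input-selection step with scikit-learn, ranking candidate influent parameters against effluent BOD by mutual information and, for comparison, reducing them to principal components; the data here are synthetic, since the Tabriz plant records are not public:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))      # 8 candidate influent parameters (synthetic)
bod = X[:, 0] ** 2 + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=500)  # effluent BOD (synthetic)

# Model B style: rank candidate inputs by mutual information with the target
mi = mutual_info_regression(X, bod, random_state=0)
print("MI ranking (best first):", np.argsort(mi)[::-1])

# Model A style: keep leading principal components as inputs instead
pcs = PCA(n_components=3).fit_transform(X)   # inputs for the PCA-based ANN
```

MI captures the nonlinear dependence on the squared term that a purely variance-based method such as PCA cannot see, which is the design contrast the study exploits.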

Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 128
712 VR in the Middle School Classroom: An Experimental Study on Spatial Relations and Immersive Virtual Reality

Authors: Danielle Schneider, Ying Xie

Abstract:

Middle school science, technology, engineering, and math (STEM) teachers face an exceptional challenge: they are expected to incorporate curricula that build strong spatial reasoning skills on rudimentary geometry concepts. Because spatial ability is so closely tied to STEM students' success, researchers are tasked with determining effective instructional practices that create an authentic learning environment, including the immersive virtual reality learning environment (IVRLE). This study investigated the effect of the IVRLE on middle school STEM students' spatial reasoning skills. The experiment comprised thirty 7th-grade STEM students divided into a treatment group, which built an object in the virtual realm on an immersive VR platform by applying spatial processing and visualizing its dimensions, and a control group, which built the identical object using a desktop computer-based, computer-aided design (CAD) program. Before and after the students participated in the respective "3D modeling" environment, their spatial reasoning abilities were assessed using the Middle Grades Mathematics Project Spatial Visualization Test (MGMP-SVT). Additionally, both groups created a physical 3D model as a secondary measure of the effectiveness of the IVRLE. The results of a one-way ANOVA identified a negative effect on those in the IVRLE. These findings suggest that with middle school students, virtual reality (VR) proved an inadequate tool for improving spatial relation skills as compared to desktop-based CAD.

Keywords: virtual reality, spatial reasoning, CAD, middle school STEM

Procedia PDF Downloads 86
711 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container

Authors: Mohammad R. Jalali

Abstract:

Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or movement induced by transport. Slosh also arises from resonant excitation of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing arising from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to produce nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular flow kinematic structure in the vertical direction for shallow and deep-water problems. The fluid velocity profile is a finite sum of coefficients, depending on space and time, multiplied by weighting functions. It should be noted that in GN theory, the flow is rotational. In this study, GN numerical simulations of the initial Gaussian hump are compared with Fourier series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution of the overall free surface sloshing patterns. The resonant free surface motions driven by an initial Gaussian disturbance are obtained by a Fast Fourier Transform (FFT) of the free surface elevation time history components. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing study are applicable to the design of stable liquefied oil containers in tankers and offshore platforms.
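
The resonant-motion analysis described above amounts to an FFT of the free surface elevation time history. A minimal sketch with a synthetic two-mode record standing in for the GN simulation output (mode frequencies and amplitudes are arbitrary):

```python
import numpy as np

dt = 0.01                                  # s, sampling interval (assumed)
t = np.arange(0, 60, dt)
# synthetic elevation record with two sloshing modes at 0.8 and 1.6 Hz
eta = 0.05 * np.cos(2 * np.pi * 0.8 * t) + 0.01 * np.cos(2 * np.pi * 1.6 * t)

spec = np.abs(np.fft.rfft(eta))            # one-sided amplitude spectrum
freq = np.fft.rfftfreq(len(eta), d=dt)
peaks = freq[np.argsort(spec)[::-1][:2]]   # two largest spectral peaks
print("dominant sloshing frequencies (Hz):", np.sort(peaks))
```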

Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions

Procedia PDF Downloads 398
710 The Hidden Role of Interest Rate Risks in Carry Trades

Authors: Jingwen Shi, Qi Wu

Abstract:

We study the role played by interest rate risk in carry trade returns in order to understand the forward premium puzzle. Our goal is to investigate to what extent the carry trade return is indeed compensation for risk taking and, more importantly, to reveal the nature of these risks. Using option data not only on exchange rates but also on interest rate swaps (swaptions), our first finding is that, besides the consensus currency risks, interest rate risks also contribute a non-negligible portion of the carry trade return. What strikes us is our second finding. We find that large downside risks of future exchange rate movements are, in fact, priced significantly in the options market on interest rates. The role played by interest rate risk differs structurally from the currency risk. There is a unique premium associated with interest rate risk, though seemingly small in size, which compensates for the tail risks, the left tail to be precise. On the technical front, our study relies on accurately retrieving implied distributions from currency options and interest rate swaptions simultaneously, especially their tail components. For this purpose, our major modeling work is to build a new international asset pricing model with an orthogonal setup for the pricing kernels and non-Gaussian dynamics, in order to capture three sets of option skews accurately and consistently across currency options and interest rate swaptions, domestic and foreign, within one model. Our results open the door to studying the forward premium anomaly through implied information from the interest rate derivatives market.

Keywords: carry trade, forward premium anomaly, FX option, interest rate swaption, implied volatility skew, uncovered interest rate parity

Procedia PDF Downloads 445
709 Exploring Hydrogen Embrittlement and Fatigue Crack Growth in API 5L X52 Steel Pipeline Under Cyclic Internal Pressure

Authors: Omar Bouledroua, Djamel Zelmati, Zahreddine Hafsi, Milos B. Djukic

Abstract:

Transporting hydrogen gas through the existing natural gas pipeline network offers an efficient solution for energy storage and conveyance. Hydrogen generated from excess renewable electricity can be conveyed through the API 5L steel-made pipelines that already exist. In recent years, there has been a growing demand for the transportation of hydrogen through existing gas pipelines. Therefore, numerical and experimental tests are required to verify and ensure the mechanical integrity of the API 5L steel pipelines that will be used for pressurized hydrogen transportation. Internal pressure loading is likely to accelerate hydrogen diffusion through the internal pipe wall and consequently accentuate the hydrogen embrittlement of steel pipelines. Furthermore, pre-cracked pipelines are susceptible to quick failure, mainly under time-dependent cyclic pressure loading, which drives fatigue crack propagation: after several loading cycles, the initial cracks will propagate to a critical size. At that point, the remaining service life of the pipeline can be estimated, and inspection intervals can be determined. This paper focuses on the hydrogen embrittlement of an API 5L steel-made pipeline under cyclic pressure loading. Pressurized hydrogen gas is transported through a network of pipelines where demands at consumption nodes vary periodically. The resulting pressure profile over time is considered a cyclic loading on the internal wall of a pre-cracked pipeline made of API 5L steel-grade material. Numerical modeling has allowed the prediction of fatigue crack evolution and estimation of the remaining service life of the pipeline. The methodology developed in this paper is based on the ASME B31.12 standard, which outlines the guidelines for hydrogen pipelines.
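
The abstract does not state its crack-growth law, but fatigue life predictions of this kind are commonly made by integrating a Paris-type law over the pressure cycles, with hydrogen-adjusted constants as indicated in ASME B31.12. A sketch under purely hypothetical parameters (not the standard's actual design curves):

```python
import math

# Paris law da/dN = C * (dK)^m, with dK = F * d_sigma * sqrt(pi * a)
C, m = 1e-11, 3.0        # hypothetical hydrogen-assisted Paris constants (m/cycle, MPa*sqrt(m))
F = 1.12                 # geometry factor, assumed constant (shallow surface crack)
d_sigma = 100.0          # MPa, hoop stress range from cyclic internal pressure (assumed)

a, a_crit = 1e-3, 8e-3   # m, initial and critical crack depths (assumed)
n_cycles = 0
while a < a_crit:
    dK = F * d_sigma * math.sqrt(math.pi * a)   # stress intensity range, MPa*sqrt(m)
    a += C * dK ** m                            # crack growth for this cycle
    n_cycles += 1
print(f"cycles to critical size: {n_cycles:,}")
```

Dividing the cycle count by the number of pressure cycles per year gives the remaining service life and hence the inspection interval.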

Keywords: hydrogen embrittlement, pipelines, transient flow, cyclic pressure, fatigue crack growth

Procedia PDF Downloads 88
708 Evaluation of Effectiveness of Three Common Equine Thrush Treatments

Authors: A. S. Strait, J. A. Bryk-Lucy, L. M. Ritchie

Abstract:

Thrush is a common disease of ungulates primarily affecting the frog and sulci, caused by the anaerobic bacterium Fusobacterium necrophorum. Thrush accounts for approximately 45.0% of hoof disorders in horses. Prevention and treatment of thrush are essential to prevent horses from developing severe infections and becoming lame. Proper knowledge of hoof care and thrush treatments is crucial to avoid financial costs, unsoundness, and lost training time. Research on the effectiveness of numerous commercial and homemade thrush treatments is limited in the equine industry. The objective of this study was to compare the effectiveness of three common thrush treatments for horses: weekly application of Thrush Buster, daily dilute bleach solution spray, or Metronidazole paste every other day. Cases of thrush diagnosed by a veterinarian or veterinarian-trained researcher were given a score, from 0 to 4, based on the severity of the thrush in each hoof (n=59) and randomly assigned a treatment. Cases were rescored each week of the three-week treatment, and the final and initial scores were compared to determine effectiveness. The thrush treatments were compared with Thrush Buster as the reference at a significance level of α=.05. Binomial logistic regression modeling found that, after adjustment for treatment week, the odds of a hoof treated with Metronidazole being thrush-free were 6.1 times those of a hoof treated with Thrush Buster (p=0.001), while the odds for a hoof treated with bleach were only 0.97 times those of a hoof treated with Thrush Buster (p=0.970). Of the three treatments utilized in this study, Metronidazole paste applied to the affected areas every other day was the most effective treatment for thrush in horses. There are many other thrush remedies available, and further research is warranted to determine the efficacy of additional treatment options.
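
A sketch of how such odds ratios are obtained with statsmodels, on hypothetical hoof-level records coded with Thrush Buster as the reference and adjusted for treatment week as in the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 59 * 3  # hypothetical: 59 hooves scored over 3 weeks
df = pd.DataFrame({
    "treatment": rng.choice(["thrush_buster", "metronidazole", "bleach"], n),
    "week": rng.integers(1, 4, n),
})
# hypothetical outcome: metronidazole clears thrush more often
p = 0.3 + 0.3 * (df.treatment == "metronidazole") + 0.05 * df.week
df["thrush_free"] = (rng.random(n) < p).astype(int)

model = smf.logit(
    "thrush_free ~ C(treatment, Treatment('thrush_buster')) + week", data=df
).fit(disp=0)
print(np.exp(model.params))   # odds ratios vs. the Thrush Buster reference
```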

Keywords: Fusobacterium necrophorum, thrush, equine, horse, lameness

Procedia PDF Downloads 156
697 Forecast of Small Wind Turbine Sales with Replacement Purchases, with and without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the US small wind turbine market potential and forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used for modeling replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57,080 (confidence interval from 49,294 to 64,866 at P = 0.95) under an average turbine lifetime of 15 years, and 62,402 (confidence interval from 54,154 to 70,648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model. This required a price forecast, for which a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42,542 (confidence interval from 32,863 to 52,221 at P = 0.95) under an average lifetime of 15 years, and 47,426 (confidence interval from 36,092 to 58,760 at P = 0.95) under an average lifetime of 20 years. In both cases the explained variance is 95.3%.
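
A sketch of the basic Bass-model identification by nonlinear regression, on synthetic annual sales standing in for the AWEA series (the replacement-purchase extension with the exponential lifetime distribution is omitted for brevity):

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, M, p, q):
    """Cumulative adopters N(t) in the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return M * (1 - e) / (1 + (q / p) * e)

years = np.arange(1, 13)                          # 2001..2012 as t = 1..12
sales = np.array([2.1, 2.6, 3.1, 3.9, 4.4, 5.2,
                  6.0, 6.4, 6.9, 6.8, 6.3, 5.6])  # synthetic annual sales (thousands)
cum_sales = np.cumsum(sales)

popt, pcov = curve_fit(bass_cumulative, years, cum_sales,
                       p0=[80, 0.01, 0.4], maxfev=10000)
M, p, q = popt
print(f"market potential M = {M:.0f}k, p = {p:.4f}, q = {q:.4f}")
```

Here M is the market potential the paper reports confidence intervals for, and p and q are the innovation and imitation coefficients of the Bass model.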

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States

Procedia PDF Downloads 348
706 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexity of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including k-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of the above approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc. for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, and can be applied to performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
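
A sketch of the out-of-sample benchmark across the four learners named above, on synthetic demand data, since the Las Vegas transactional sample is proprietary:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 5))    # price, review score, location, chain scale, ... (synthetic)
y = 100 - 8 * X[:, 0] + 3 * X[:, 1] + rng.normal(scale=5, size=2000)  # demand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "kNN": KNeighborsRegressor(n_neighbors=10),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, mdl in models.items():
    mdl.fit(X_tr, y_tr)
    print(name, mean_absolute_percentage_error(y_te, mdl.predict(X_te)))
```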

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 132
705 Spatiotemporal Modeling of Under-Five Mortality and Associated Risk Factors in Ethiopia

Authors: Melkamu A. Zeru, Aweke A. Mitiku, Endashaw Amuka

Abstract:

Background: Under-five mortality is the likelihood that a baby will die before turning exactly 5 years old, expressed as a rate per 1,000 live births. Exploring the spatial distribution and identifying the temporal pattern are important for reducing under-five child mortality globally, including in Ethiopia. Thus, this study aimed to identify the risk factors of under-five mortality and its spatiotemporal variation across Ethiopian administrative zones. Method: This study used the 2000-2016 Ethiopian Demographic and Health Survey (EDHS) data, which were collected using a two-stage sampling method. A total weighted sample of 43,029 under-five child mortality records (10,873 in 2000, 9,861 in 2005, 11,654 in 2011, and 10,641 in 2016) was used. A space-time dynamic model was employed to account for spatial and time effects in 65 administrative zones of Ethiopia. Results: The general nesting spatial-temporal dynamic model showed a significant space-time interaction effect [γ = -0.1444, 95% CI (-0.6680, -0.1355)] for under-five mortality. Increases in the percentages of mothers' illiteracy [β = 0.4501, 95% CI (0.2442, 0.6559)], non-vaccination [β = 0.7681, 95% CI (0.5683, 0.9678)], and unimproved water [β = 0.5801, 95% CI (0.3793, 0.7808)] increased death rates for under-five children, while increased percentages of contraceptive use [β = -0.6609, 95% CI (-0.8636, -0.4582)] and more than four ANC visits [β = -0.1585, 95% CI (-0.1812, -0.1357)] contributed to a decreased under-five mortality rate at the zone level in Ethiopia. Conclusions: Even though the mortality rate for children under five has decreased over time, it remains high in several zones of Ethiopia, and there is spatial and temporal variation in under-five mortality among zones. Therefore, it is very important to consider spatial neighbourhoods and temporal context when aiming to reduce under-five mortality.
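
For reference, a dynamic general nesting spatial model of the kind reported above is commonly written as follows; this is the standard textbook form, and the paper's exact specification, weight matrix, and covariate set are assumptions here:

```latex
y_t = \tau y_{t-1} + \rho W y_t + \gamma W y_{t-1} + X_t \beta + W X_t \theta + \mu + \varepsilon_t
```

Here y_t stacks the zone-level under-five mortality rates at time t, W is the spatial weight matrix over the 65 zones, μ collects zone effects, and γ, the coefficient on the spatially lagged previous period W y_{t-1}, corresponds to the space-time interaction effect reported above.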

Keywords: under-five children mortality, space-time dynamic, spatiotemporal, Ethiopia

Procedia PDF Downloads 37
704 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large volumes of distributed data from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets are very large. Only two solutions exist to tackle this challenging issue. First, the computation which requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability. Hence, the second scenario can be considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities. Current research is mainly focused on incorporating cognitive science in various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing the request and creating the knowledge base. Depending upon the link capacity, a decision is taken whether to transfer the data sets or to partition them. The agents predict the next request in order to serve the requesting site with data sets in advance. This reduces the data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 394
703 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education

Authors: Gülay Dalgıç, Gildis Tachir

Abstract:

Architectural design studios are the environments in which architectural design is realized as a palpable product in architectural education. In the design studios, the information the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and then ends with the creation of an acceptable design/result product. In the studio environment, where many design and thought relationships are evaluated, the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data for the defined early design phase criteria were obtained with feedback graphics created for architect candidates who performed e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program. It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.

Keywords: education modeling, architecture education, design education, design process

Procedia PDF Downloads 137
702 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method

Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park

Abstract:

3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which entails great expense and is time-consuming. The terrestrial laser scanner (TLS) is a common survey technique used to measure a 3D shape model quickly and accurately, and is applied at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud because the raw point cloud is a massive volume of data, so the capability of carrying out useful analyses on unstructured 3D points is also limited. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective. The Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a pedestrian bridge, about 32 m long and 2 m wide, connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. The 3D point cloud of the test bed was constructed from a TLS measurement and divided into members by the segmentation algorithm. Experimental analyses of the results from the proposed unsupervised segmentation process are promising, and the segmented point cloud can be further processed to manage the configuration of each member.
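
A minimal NumPy sketch of the FCM iteration at the core of the segmentation (fuzziness m = 2 is assumed; the similarity-driven cluster merging that follows it in the paper is omitted):

```python
import numpy as np

def fuzzy_c_means(points, c=4, m=2.0, iters=100, seed=0):
    """Cluster an (N, 3) point cloud into c fuzzy clusters."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(points)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ points / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, u

# synthetic stand-in for the bridge scan: two separated slabs of points
pts = np.vstack([np.random.rand(500, 3), np.random.rand(500, 3) + [5, 0, 0]])
centers, u = fuzzy_c_means(pts, c=2)
labels = u.argmax(axis=0)                    # hard member assignment per point
print(np.bincount(labels))
```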

Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)

Procedia PDF Downloads 234
701 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high need for such material for tertiary studies, translation offers a way to enhance the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and in terms of its complexity. A text that is intricately written with unique rhetorical devices, subject-matter foundation and cultural references will undoubtedly challenge the translator. Longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book is chosen because it has often been used as a textbook or for reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy book for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation to some extent of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task. They can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language as suggested by Michael Halliday as its theoretical framework. Concepts of the context of culture, the context of situation and measures of the field, tenor and mode form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions and the linguistic and textual devices used to achieve them can then be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 149
700 Control of Doxorubicin Release Rate from Magnetic PLGA Nanoparticles Using a Non-Permanent Magnetic Field

Authors: Inês N. Peça, A. Bicho, Rui Gardner, M. Margarida Cardoso

Abstract:

Inorganic/organic nanocomplexes offer tremendous scope for future biomedical applications, including imaging, disease diagnosis, and drug delivery. The combination of Fe3O4 with biocompatible polymers to produce smart drug delivery systems for use in pharmaceutical formulation presents a powerful tool for targeting anti-cancer drugs to specific tumor sites through the application of an external magnetic field. In the present study, we focused on evaluating the effect of the magnetic field application time on the rate of drug release from iron oxide polymeric nanoparticles. Doxorubicin, an anticancer drug, was selected as the model drug loaded into the nanoparticles. Nanoparticles composed of poly(d-lactide-co-glycolide) (PLGA), a biocompatible polymer already approved by the FDA, containing iron oxide nanoparticles (MNP) for magnetic targeting and doxorubicin (DOX) were synthesized by the o/w solvent extraction/evaporation method and characterized by scanning electron microscopy (SEM), dynamic light scattering (DLS), inductively coupled plasma-atomic emission spectrometry, and Fourier transform infrared spectroscopy. The produced particles had smooth surfaces and spherical shapes, exhibiting a size between 400 and 600 nm. The effect of the magnetic doxorubicin-loaded PLGA nanoparticles on cell viability was investigated in mammalian CHO cell cultures. The results showed that unloaded magnetic PLGA nanoparticles were nontoxic, while the magnetic particles without polymeric coating showed a high level of toxicity. Concerning therapeutic activity, doxorubicin-loaded magnetic particles caused a remarkable enhancement of the cell inhibition rates compared to their non-magnetic counterpart. In vitro drug release studies performed under a non-permanent magnetic field show that the application time and the on/off cycle duration have a great influence on both the final amount and the rate of drug release. In order to determine the mechanism of drug release, the data obtained from the release curves were fitted to the semi-empirical equation of the Korsmeyer-Peppas model, which may be used to describe both Fickian and non-Fickian release behaviour. The doxorubicin release mechanism was shown to be governed mainly by Fickian diffusion. The results obtained show that the rate of drug release from the produced magnetic nanoparticles can be modulated through the application time of the magnetic field.
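
The Korsmeyer-Peppas fit reduces to a straight line in log-log coordinates, Mt/M∞ = k·tⁿ, with the exponent n diagnosing the transport mechanism (for spheres, n of roughly 0.43 or below indicates Fickian diffusion; the fit is conventionally applied to the first ~60% of release). A sketch on hypothetical release data:

```python
import numpy as np

t = np.array([1, 2, 4, 8, 12, 24.0])                   # h, sampling times (hypothetical)
frac = np.array([0.08, 0.12, 0.17, 0.24, 0.29, 0.40])  # M_t / M_inf, kept below 0.6

# log(M_t/M_inf) = log k + n log t  ->  ordinary linear least squares
n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
print(f"n = {n:.2f} (n <~ 0.43 for spheres suggests Fickian), k = {np.exp(log_k):.3f}")
```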

Keywords: drug delivery, magnetic nanoparticles, PLGA nanoparticles, controlled release rate

Procedia PDF Downloads 259
699 Comparative Study of Flood Plain Protection Zone Determination Methodologies in Colombia, Spain and Canada

Authors: P. Chang, C. Lopez, C. Burbano

Abstract:

Flood protection zones are riparian buffers that are established to manage and mitigate the impact of flooding and, in turn, protect local populations. The purpose of this study was to evaluate the Guía Técnica de Criterios para el Acotamiento de las Rondas Hídricas in Colombia against international regulations in Canada and Spain, in order to determine its limitations and contribute to its improvement. The need to establish a specific corridor that allows for the dynamic development of a river is clear; however, limitations present in the Colombian technical guide are identified. The study shows that the international regulations provide concepts similar to those used in Colombia, but additionally integrate aspects such as regionalization, which allows for a better characterization of the channel way, and incorporate the frequency of flooding and its probability of occurrence into the concept of risk when determining the protection zone. The case study analyzed in Dosquebradas, Risaralda, compared the application of the different standards through hydraulic modeling. It highlights that the current Colombian standard does not offer sufficient detail in its implementation phase, which leads to a false sense of security related to inaccuracy and lack of data. Furthermore, the study demonstrates how the Colombian norm is ill-adapted to the conditions of Dosquebradas, typical of the Andes region, in both social and hydraulic aspects, and neither reduces the risk nor improves the protection of the population. We consider it pertinent to include risk estimation as an integral part of the methodology when establishing flood protection zones, considering the particularity of water systems, as they are characterized by heterogeneous natural dynamic behavior.

Keywords: environmental corridor, flood zone determination, hydraulic domain, legislation flood protection zone

Procedia PDF Downloads 113
698 Numerical Modeling of Film Cooling of the Surface at Non-Uniform Heat Flux Distributions on the Wall

Authors: M. V. Bartashevich

Abstract:

The problem of heat transfer in a thin laminar liquid film is solved numerically. A thin film of liquid flows down an inclined surface under conditions of variable heat flux on the wall. The use of thin liquid films makes it possible to create effective technologies for cooling surfaces. However, it is important to investigate the most suitable cooling regimes from a safety point of view, in order, for example, to avoid overheating caused by ruptures of the liquid film, and also to study the most effective cooling regimes depending on the character of the heat flux distribution on the wall, as well as the character of the blowing of the film surface, i.e., the external shear stress on its surface. In the problem statement, the heat transfer coefficient between the liquid and the gas is set on the film surface, together with a variable external shear stress, i.e., the intensity of blowing. It is shown that the combination of these factors, the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing, affects the efficiency of heat transfer. With an increase in the intensity of blowing, the cooling efficiency increases, reaches a maximum, and then decreases. It is also shown that the more uniform the heating of the wall, the more efficient the heat sink. A separate study was made of the flow regime along a horizontal surface, when the liquid film moves solely due to the external stress. For this regime, an analytical solution for the temperature in the entrance region is used for further numerical calculations downstream, and the influence of the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing of the film surface on the heat transfer efficiency was likewise studied. This work was carried out at the Kutateladze Institute of Thermophysics SB RAS (Russia) and supported by FASO Russia.

Keywords: heat flux, heat transfer enhancement, external blowing, thin liquid film

Procedia PDF Downloads 149
697 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants

Authors: Zarina Chokparova, Ighor Uzhinsky

Abstract:

Carbon dioxide emissions resulting from the burning of fossil fuels on large scales, such as in the oil industry or power plants, lead to severe implications including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some precarious and costly ways of alleviating the detriment of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of drawback effects and their mitigation), one physically and commercially available technology for CO₂ capture and disposal is the supersonic system for inertial extraction of CO₂ from post-combustion streams. Since the flue gas emitted from the combustion system has a carbon dioxide concentration of only 10-15 volume percent, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop of temperature and pressure. This conversion of potential energy into kinetic energy causes desublimation of CO₂. The solidified carbon dioxide can be sent to a separate vessel for further disposal. The major advantages of this solution are its economic efficiency, physical stability, and the compactness of the system, as well as the fact that no additional chemical media are required. However, several challenges remain in optimizing the system: increasing the size of the separated CO₂ particles (their effective diameters are on the micrometer scale), reducing the amount of concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determining the thermodynamic conditions of the vapor-solid mixture, including specification of a valid and accurate equation of state, remains an essential goal. Because of the high speeds and temperature changes reached during the process, the influence of the released heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status is presented, and a program for further evaluation of this approach is proposed.
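
The rapid temperature and pressure drop through the converging-diverging nozzle follows, to first approximation, the isentropic flow relations. A sketch with assumed round-number inlet conditions, showing that the static temperature falls below the roughly 195 K frost point of CO₂ at moderate supersonic Mach numbers:

```python
T0, P0, gamma = 300.0, 101_325.0, 1.33   # stagnation state and heat-capacity ratio (assumed)

for M in (1.0, 1.5, 2.0, 2.5):
    ratio = 1 + (gamma - 1) / 2 * M ** 2
    T = T0 / ratio                               # static temperature, K
    P = P0 / ratio ** (gamma / (gamma - 1))      # static pressure, Pa
    print(f"Mach {M}: T = {T:5.1f} K, P = {P / 1000:6.1f} kPa")
```

Latent heat released on desublimation warms the flow back up, which is one reason the abstract calls for a compressible flow model that accounts for the released heat.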

Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture

Procedia PDF Downloads 141
696 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the EU_26 and the Agadir_4 countries, once we control for the evolution of the Agadir countries' exports to the rest of the world. The next question is whether there has been any remarkable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The paper's empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin, and the Poisson pseudo-maximum likelihood (PPML) estimator was used to estimate the gravity equation. The methodological approach of this work is threefold. It starts by conducting a hierarchical cluster analysis to classify final export flows showing a certain degree of linkage with each other. The analysis resulted in three main sectoral clusters of exports between the Agadir_4 and the EU_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare parts sectors. In the second step, the export flows from the three clusters are treated with diagonal rules of origin through a double-differences approach, against an equally comparable untreated control group. The third step verifies the results through a robustness check based on propensity score matching, validating that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Throughout this analysis, the interaction term combining the treatment effect and time turned out to be partially significant for 13 of the 17 covered sectors, further asserting that treatment with diagonal rules of origin contributed to increasing the Agadir_4 countries' final and intermediate exports to the EU_26 by 335% on average, and to changing the structure and composition of Agadir_4 exports to the EU_26 countries.
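
A sketch of the PPML estimation of a gravity equation with statsmodels, on synthetic flows; the study's full specification additionally carries the sector, regime, and double-differences structure described above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "log_gdp_o": rng.normal(10, 1, n),
    "log_gdp_d": rng.normal(10, 1, n),
    "log_dist": rng.normal(7, 0.5, n),
    "treated": rng.integers(0, 2, n),   # flow under the diagonal RoO regime
    "post": rng.integers(0, 2, n),      # after PANEURO adoption
})
mu = np.exp(0.8 * df.log_gdp_o + 0.7 * df.log_gdp_d - 1.1 * df.log_dist
            + 0.3 * df.treated * df.post - 3)
df["exports"] = rng.poisson(mu)

# PPML: Poisson GLM on trade levels with robust standard errors
ppml = smf.glm("exports ~ log_gdp_o + log_gdp_d + log_dist + treated * post",
               data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.summary().tables[1])
```

The treated:post coefficient plays the role of the treatment-by-time interaction whose significance the paper reports for 13 of 17 sectors.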

Keywords: Agadir Association Agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 318
695 An Analysis of Different Essential Components of Flight Plan Operations at Low Altitude

Authors: Apisit Nawapanpong, Natthapat Boonjerm

Abstract:

This project aims to analyze and identify the flight plan of low-altitude aviation in Thailand and other countries. The development of UAV technology has led to innovation and revolution in the aviation industry, including new modes of passenger and freight transportation, and it has also widely affected other industries. At present, this technology is being developed rapidly and tested all over the world to make it as efficient as possible, and it is likely to grow more extensively. However, no flight plan for low-altitude operation has been published by a government organization; compared with high-altitude aviation with manned aircraft, various unique factors differ, whether mission, operation, altitude range, or airspace restrictions. Because major problems remain in making the essential components of low-altitude operation practical and tangible, the main consideration of this project is to analyze the components of low-altitude operations conducted up to altitudes of 400 ft (120 meters) above ground level, referenced to the terrain: for example, air traffic management, classification of aircraft, basic necessities and safety, and control areas. This research focuses on confirming the theory through qualitative and quantitative research combined with theoretical modeling and regulatory frameworks, and by gaining insights from various positions in the aviation industry, including aviation experts, government officials, air traffic controllers, pilots, and airline operators, to identify the critical essential components of low-altitude flight operation. The project uses scientific and statistical software to verify that the results are consistent with the theory and to support the regulation of flight plans for low-altitude operation based on the essential components identified here; it can be further developed for future studies and research in the aviation industry.

Keywords: low-altitude aviation, UAV technology, flight plan, air traffic management, safety measures

Procedia PDF Downloads 68
694 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, which means coupling together the 2-D Navier-Stokes equations, constitutive equations, and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, the elasticity, and the number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
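
The abstract does not give the exact lumped network, but a common minimal choice for the 0-D outflow model is a two-element (RC) Windkessel, which receives the flow rate from the 2-D solver and returns an outlet pressure. A sketch of that component alone, with a pulsatile stand-in for the coupled flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.5          # peripheral resistance and compliance (nondimensional, assumed)

def q_in(t):
    """Flow rate handed over from the 2-D Navier-Stokes outlet (pulsatile stand-in)."""
    return 1.0 + 0.8 * np.sin(2 * np.pi * t)

def windkessel(t, p):
    # C dP/dt = Q_in(t) - P / R
    return (q_in(t) - p / R) / C

sol = solve_ivp(windkessel, (0, 10), [1.0], max_step=0.01)
print("outlet pressure at end of run:", sol.y[0, -1])
```

In the coupled scheme, this pressure would be fed back to the 2-D domain as its outflow boundary condition at each time step.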

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes/0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 361
693 Rational Approach to the Design of a Sustainable Drainage System for Permanent Site of Federal Polytechnic Oko: A Case Study for Flood Mitigation and Environmental Management

Authors: Fortune Chibuike Onyia, Femi Ogundeji Ayodele

Abstract:

The design of a drainage system at the permanent site of Federal Polytechnic Oko in Anambra State is critical for mitigating flooding, managing surface runoff, and ensuring environmental sustainability. The design process employed a comprehensive analysis involving topographical surveys, hydraulic modeling, and assessment of local soil types to ensure stability and efficient water conveyance. Proper slope gradients were considered to maintain adequate flow velocities and avoid sediment deposition, which could hinder long-term performance. From the results, the estimated channel size was 0.199 m by 0.0199 m and 0.0199 m². This study proposed a channel size of 1.4 m depth by 0.5 m width and 0.7 m², optimized to accommodate the anticipated peak flow resulting from heavy rainfall and storm-water events. This sizing is based on hydrological data, taking into account rainfall intensity, runoff coefficients, and catchment area characteristics. The objective is to effectively convey storm-water while preventing overflow, erosion, and subsequent damage to infrastructure and properties. This sustainable approach incorporates provisions for maintenance and aligns with urban drainage standards to enhance durability and reliability. Implementing this drainage system will mitigate flood risks, safeguard campus facilities, improve overall water management, and contribute to the development of resilient infrastructure at Federal Polytechnic Oko.
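
Two checks are implied by the sizing argument above: peak runoff from the catchment and conveyance of the proposed 1.4 m by 0.5 m channel. A sketch using the rational method and Manning's equation, with placeholder runoff coefficient, rainfall intensity, catchment area, slope, and roughness rather than the study's values:

```python
import math

# Rational method: Q = C * i * A / 360  (i in mm/h, A in ha, Q in m^3/s)
C_runoff, i_mmph, A_ha = 0.6, 100.0, 5.0         # all assumed placeholders
q_peak = C_runoff * i_mmph * A_ha / 360
print(f"design peak flow: {q_peak:.2f} m^3/s")

# Manning capacity of the proposed rectangular channel (0.5 m wide, 1.4 m deep)
n, slope = 0.013, 0.005                          # concrete lining, assumed bed slope
b, y = 0.5, 1.4
area = b * y                                     # flow area, 0.7 m^2 as proposed
r_hyd = area / (b + 2 * y)                       # hydraulic radius
q_cap = area / n * r_hyd ** (2 / 3) * math.sqrt(slope)
print(f"channel capacity: {q_cap:.2f} m^3/s (should exceed the peak flow)")
```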

Keywords: flood mitigation, drainage system, sustainable design, environmental management

Procedia PDF Downloads 6
692 Implication of Fractal Kinetics and Diffusion Limited Reaction on Biomass Hydrolysis

Authors: Sibashish Baksi, Ujjaini Sarkar, Sudeshna Saha

Abstract:

In the present study, hydrolysis of Pinus roxburghii wood powder was carried out with Viscozyme, and the kinetics of the hydrolysis were investigated. Finely ground sawdust was submerged in a 2% aqueous peroxide solution (pH = 11.5) and pretreated through autoclaving, probe sonication, and alkaline peroxide pretreatment. Afterward, the pretreated material was subjected to hydrolysis. A series of experiments was executed with delignified biomass (50 g/l) and varying enzyme concentrations (24.2-60.5 g/l). In the present study, 14.32 g/l of glucose, along with 7.35 g/l of xylose, was recovered with a Viscozyme concentration of 48.8 g/l, and this was treated as the optimum condition. Additionally, thermal deactivation of Viscozyme was investigated and found to decrease gradually with escalated enzyme loading, from 48.4 g/l (dissociation constant = 0.05 h⁻¹) to 60.5 g/l (dissociation constant = 0.02 h⁻¹). The hydrolysis reaction is a pseudo-first-order reaction, and therefore the rate of hydrolysis can be expressed as a fractal-like kinetic equation that relates the product concentration to the hydrolysis time t. The value of the rate constant K increases from 0.008 to 0.017 with augmented enzyme concentration from 24.2 g/l to 60.5 g/l. A greater value of K is associated with a stronger enzyme binding capacity of the substrate mass. An escalated concentration of supplied enzyme ensures improved interaction with more substrate molecules, resulting in enhanced depolymerization of the polymeric sugar chains per unit time, which eventually modifies the physicochemical structure of the biomass. All fractal dimensions are between 0 and 1; the lower the fractal dimension, the more easily the biomass is hydrolyzed. With increased enzyme concentration from 24.2 g/l to 48.4 g/l, the fractal dimension goes down from 0.1 to 0.044, indicating that more enzyme molecules can hydrolyze the substrate more easily. However, an increased value is observed on a further increment of enzyme concentration to 60.5 g/l because of diffusional limitation. It is evident that the hydrolysis reaction system is a heterogeneous organization, and the product formation rate depends strongly on the enzyme diffusion resistances caused by the rate-limiting structures of the substrate-enzyme complex. The value of the diffusion rate constant increases from 1.061 to 2.610 with escalated enzyme concentration from 24.2 to 48.4 g/l. As this rate constant is proportional to Fick's diffusion coefficient, it can be assumed that with a higher concentration of enzyme, a larger enzyme mass dM diffuses into the substrate through the surface dF per unit time dt; a higher rate constant value is therefore associated with faster diffusion of enzyme into the substrate. Regression analysis of the time curves at various enzyme concentrations shows that the diffusion resistance constant increases from 0.3 to 0.51 for the first two enzyme concentrations and decreases again at an enzyme concentration of 60.5 g/l, since on a differential scale the enzyme experiences greater resistance when a larger mass dM diffuses through dF in dt.
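
In fractal (Kopelman-type) kinetics the rate coefficient decays in time, k(t) = k₀t⁻ʰ, so a pseudo-first-order product curve integrates to C(t) = C_max[1 − exp(−k₀t^(1−h)/(1−h))]. A sketch of fitting k₀ and the fractal exponent h on hypothetical glucose data; the paper's exact fractal-like equation may differ from this form:

```python
import numpy as np
from scipy.optimize import curve_fit

def fractal_release(t, c_max, k0, h):
    # integrated pseudo-first-order kinetics with k(t) = k0 * t**(-h), h < 1
    return c_max * (1 - np.exp(-k0 * t ** (1 - h) / (1 - h)))

t = np.array([2, 4, 8, 12, 24, 48.0])                # h, hypothetical sampling times
glucose = np.array([1.5, 2.7, 4.5, 6.1, 9.2, 12.6])  # g/l, hypothetical yields

(c_max, k0, h), _ = curve_fit(fractal_release, t, glucose,
                              p0=[15, 0.05, 0.1], bounds=(0, [30, 1, 0.95]))
print(f"k0 = {k0:.3f}, fractal exponent h = {h:.3f}, C_max = {c_max:.1f} g/l")
```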

Keywords: viscozyme, glucose, fractal kinetics, thermal deactivation

Procedia PDF Downloads 111
691 Patient-Specific Design Optimization of Cardiovascular Grafts

Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw

Abstract:

Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example, establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match patients' growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational fluid dynamics (CFD) analysis is done with models constructed from each individual patient's data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of graft evaluation. We are also developing a methodology for the fabrication of the optimized designs.

Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering

Procedia PDF Downloads 242
690 Promoting Patients' Adherence to Home-Based Rehabilitation: A Randomised Controlled Trial of a Theory-Driven Mobile Application

Authors: Derwin K. C. Chan, Alfred S. Y. Lee

Abstract:

The integrated model of self-determination theory and the theory of planned behaviour has been successfully applied to explain individuals’ adherence to health behaviours, including behavioural adherence to rehabilitation. This randomised controlled trial examined the effectiveness of an mHealth intervention (i.e., a mobile application), developed on the basis of this integrated model, in promoting treatment adherence among patients with anterior cruciate ligament rupture during their post-surgery home-based rehabilitation period. Subjects were 67 outpatients (aged between 18 and 60) who had undergone anterior cruciate ligament (ACL) reconstruction surgery less than 2 months before the study. Participants were randomly assigned either to the treatment group (who received the smartphone application; N = 32) or to the control group (who received standard treatment only; N = 35), and completed psychological measures relating to the theories (e.g., motivation, social cognitive factors, and behavioural adherence) and clinical outcome measures relating to ACL recovery (e.g., subjective knee function (IKDC), laxity (KT-1000), and muscle strength (Biodex)) at baseline, 2 months, and 4 months. Generalised estimating equations showed that the group-by-time interaction was significant only for intention (Wald χ² = 5.23, p = .02); the interactions for perceived behavioural control (Wald χ² = 3.19, p = .07), behavioural adherence (Wald χ² = 3.08, p = .08), and subjective knee evaluation (Wald χ² = 2.97, p = .09) were marginally significant. Post-hoc between-subject analysis showed that the control group had significant drops in perceived behavioural control (p < .01), subjective norm (p < .01), intention (p < .01), and behavioural adherence (p < .01) from baseline to 4 months, but this pattern was not observed in the treatment group. The treatment group showed a significant decrease in behavioural adherence (p < .05) at 2 months, but no such decrease at 4 months (p > .05). Subjective knee evaluation in both groups improved significantly at 2 and 4 months from baseline (p < .05), and the improvement in the control group (mean improvement at 4 months = 40.18) was slightly stronger than in the treatment group (mean improvement at 4 months = 34.52). In conclusion, the findings showed that the theory-driven mobile application ameliorated the decline in treatment intention during home-based rehabilitation. Patients in the treatment group also reported better muscle strength than the control group at the 4-month follow-up. Overall, the mobile application shows promise for tackling orthopaedic outpatients’ non-adherence to medical treatment.
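
As an illustration of the analysis named above, a minimal sketch of a generalised estimating equations model testing the group-by-time interaction on intention might look as follows in Python with statsmodels. The data file and column names are hypothetical, and the trial's actual specification (covariance structure, link function) is not reported in the abstract:

# Minimal sketch: GEE analysis of a group-by-time interaction on intention.
# df is assumed to hold one row per subject per visit with hypothetical
# columns: subject_id, group (0=control, 1=treatment), time (0/2/4 months),
# intention (continuous score).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("acl_rehab_long.csv")  # hypothetical long-format file

model = sm.GEE.from_formula(
    "intention ~ group * time",
    groups="subject_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),  # within-subject correlation
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())  # Wald tests include the group:time interaction term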

Keywords: self-determination theory, theory of planned behaviour, mobile health, orthopaedic patients

Procedia PDF Downloads 198
689 Optimizing Parallel Computing Systems: A Java-Based Approach to Modeling and Performance Analysis

Authors: Maher Ali Rusho, Sudipta Halder

Abstract:

The purpose of this study is to develop optimal solutions for models of parallel computing systems using the Java language. During the study, programmes were written for each of the examined models: the parallel sorting code outputs a sorted array of random numbers; the parallel data-processing code reports the processing time and the first elements of the list of squared numbers; and the asynchronous request-processing code displays a completion message for each task after a slight delay. The main results include the development of optimisation methods for algorithms and processes, such as the division of tasks into subtasks, the use of non-blocking algorithms, effective memory management, and load balancing, as well as diagrams comparing these methods by their characteristics, including descriptions, implementation examples, and advantages. In addition, various specialised libraries were analysed to improve the performance and scalability of the models. The results showed a substantial improvement in response time, throughput, and resource efficiency in parallel computing systems. Scalability and load assessments were conducted, demonstrating how the system responds to an increase in data volume or in the number of threads. Profiling tools were used to analyse performance in detail and identify bottlenecks in the models, which improved the architecture and implementation of the parallel computing systems. The obtained results emphasise the importance of choosing the right methods and tools for optimising parallel computing systems, which can substantially improve their performance and efficiency.
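
The study's implementations were written in Java; purely as a language-agnostic illustration of the same ideas (dividing a sort into subtasks, parallel data processing, and asynchronous request handling), a minimal Python sketch might look as follows. All names and numbers are illustrative:

# Minimal sketch (illustrative, not the study's Java code): splitting a sort
# into subtasks, parallel mapping, and asynchronous request handling.
import asyncio
import heapq
import random
from concurrent.futures import ProcessPoolExecutor

def parallel_sort(data, workers=4):
    """Sort chunks in separate processes, then merge the sorted runs."""
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        runs = list(pool.map(sorted, parts))
    return list(heapq.merge(*runs))

async def handle_request(task_id):
    """Simulate asynchronous request processing with a slight delay."""
    await asyncio.sleep(0.1)
    print(f"task {task_id} completed")

async def main():
    await asyncio.gather(*(handle_request(i) for i in range(5)))

if __name__ == "__main__":
    numbers = [random.randint(0, 10_000) for _ in range(100_000)]
    print(parallel_sort(numbers)[:10])    # sorted array of random numbers
    with ProcessPoolExecutor() as pool:   # parallel data processing
        squares = list(pool.map(pow, range(10), [2] * 10))
    print(squares[:5])                    # first elements of the squared list
    asyncio.run(main())                   # asynchronous request handling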

Keywords: algorithm optimisation, memory management, load balancing, performance profiling, asynchronous programming

Procedia PDF Downloads 12
688 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has proven to be a crucial problem. The uneven rise in temperature within the concrete is the fundamental quality-control issue in these constructions, so developing accurate and fast temperature-prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex, and laboratory measurement of the thermal properties cannot accurately predict their variance under site conditions. Therefore, the specific heat capacity and the thermal conductivity coefficient are treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and their values are related to the degree of hydration. Using numerical sensitivity analysis, this study examines the effects of varying the thermal conductivity and specific heat capacity on the maximum temperature and on the time it takes the concrete to reach that temperature, and compares the results to models that assume fixed values for these two thermal properties. The study covers 7 concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag, GGBFS). It is concluded that the maximum temperature does not change when the conductivity coefficient is held constant, but a variable specific heat capacity must be taken into account; the variable specific heat capacity can also have a considerable effect on the time at which the concrete's central node reaches its maximum temperature. The usage of GGBFS has more influence than fly ash.
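
The abstract gives the functional form (a linear decrease with the degree of hydration) but not the coefficients; a minimal sketch, assuming a Schindler-type linear conductivity model and hypothetical specific-heat endpoints, could be:

# Minimal sketch: hydration-dependent thermal properties modeled as linear
# functions of the degree of hydration alpha (0..1). The coefficients are
# hypothetical placeholders, not the values proposed in the study.
def thermal_conductivity(alpha, k_ultimate=2.0):
    """W/(m*K): decreases linearly as hydration proceeds (Schindler-type form)."""
    return k_ultimate * (1.33 - 0.33 * alpha)

def specific_heat(alpha, c_fresh=1100.0, c_hardened=900.0):
    """J/(kg*K): linear interpolation between fresh and fully hydrated concrete."""
    return c_fresh + (c_hardened - c_fresh) * alpha

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha:.1f}: k={thermal_conductivity(alpha):.2f} W/(m*K), "
          f"c={specific_heat(alpha):.0f} J/(kg*K)")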

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 77
687 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy’s attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or the crew. Penetration equations are derived from penetration experiments, which require much time and effort, and they usually hold only for the specific target material and bullet type used in the experiments. Penetration simulation using ANSYS can therefore be another option for calculating penetration depth; however, the targets must be modeled and the input parameters selected carefully in order to obtain an accurate result. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives must be balanced in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize accuracy, a sensitivity analysis of the input parameters was performed and the RMS error against experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) A boundary condition fixing only the side surface of the target gives a greater penetration depth than one fixing both the side and rear surfaces. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment, and penetration analysis can be done on a computer without actual experiments. Data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for limited target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the AGCV design process.
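
As a minimal sketch of the accuracy criterion described above, the RMS error between simulated and experimental penetration depths could be computed for each candidate mesh size as follows; all depth values are hypothetical placeholders, not the study's data:

# Minimal sketch: ranking candidate mesh sizes by RMS error between
# simulated and experimental penetration depths. All numbers are
# hypothetical placeholders, not the study's results.
import numpy as np

experimental_depth = np.array([42.0, 55.0, 61.0])  # mm, from published data

# Simulated depths (mm) for the same three shots at each candidate mesh size
simulated = {
    0.9: np.array([38.5, 50.2, 56.1]),
    0.7: np.array([40.8, 53.0, 59.0]),
    0.5: np.array([41.7, 54.6, 60.4]),
}

for mesh_size, depths in simulated.items():
    rms = np.sqrt(np.mean((depths - experimental_depth) ** 2))
    print(f"mesh size {mesh_size} mm: RMS error = {rms:.2f} mm")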

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 401
686 Functional Connectivity Signatures of Polygenic Depression Risk in Youth

Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip

Abstract:

Background: Risks for depression are myriad and include both genetic and brain-based factors. However, the relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach, connectome-based predictive modeling (CPM), to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain Cognitive Development (ABCD) study across diverse brain states, i.e., resting state, affective working memory, response inhibition, and reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho’s = 0.20-0.27, p’s < .001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience-to-subcortical connectivity, and decreased subcortical-to-motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N = 1,000, N = 500, and N = 250 but became unstable at N = 100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. The identified networks may be considered potential treatment targets or vulnerability markers for depression risk.
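
The CPM pipeline itself is not spelled out in the abstract; a minimal sketch of its standard core loop (select edges correlated with the target score in training folds, sum them into a network-strength feature, fit a linear model) on random placeholder data might be:

# Minimal sketch of connectome-based predictive modeling (CPM). Data here
# are random placeholders, not ABCD data; only positive edges are used,
# matching the positive-prediction findings reported above.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 500
X = rng.standard_normal((n_subjects, n_edges))             # vectorized connectomes
y = X[:, :10].sum(axis=1) + rng.standard_normal(n_subjects)  # toy PRS-like score

predictions = np.empty(n_subjects)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # Edge selection: keep edges positively correlated with y in the training fold
    r_p = [pearsonr(X[train, j], y[train]) for j in range(n_edges)]
    selected = [j for j, (r, p) in enumerate(r_p) if p < 0.01 and r > 0]
    # Network strength: sum of selected edges per subject
    strength_train = X[np.ix_(train, selected)].sum(axis=1, keepdims=True)
    strength_test = X[np.ix_(test, selected)].sum(axis=1, keepdims=True)
    model = LinearRegression().fit(strength_train, y[train])
    predictions[test] = model.predict(strength_test).ravel()

rho = pearsonr(predictions, y)[0]
print(f"cross-validated prediction rho = {rho:.2f}")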

Keywords: genetics, functional connectivity, pre-adolescents, depression

Procedia PDF Downloads 58