Search results for: finite element model/COMSOL multiphysics
9676 Improving Technical Translation Ability of the Iranian Students of Translation Through Multimedia: An Empirical Study
Authors: Dina Zakeri, Ali Aminzad
Abstract:
Multimedia-assisted teaching eliminates traditional training barriers, facilitates the cognition process and improves learning outcomes. This study attempted to examine the effects of implementing multimedia on the teaching of the technical translation model and on the technical text translation ability of Iranian students of translation. To fulfill the purpose of the study, a total of forty-six learners were selected out of fifty-seven participants in a higher education center in Tehran, based on their scores in the Preliminary English Test (PET), and were divided randomly into experimental and control groups. Prior to the treatment, a technical text translation questionnaire was devised and then approved and validated by three assistant professors of technical fields and three assistant professors of Teaching English as a Foreign Language (TEFL) at the university. This questionnaire was administered as a pretest to both groups. The control and experimental groups were trained for five successive weeks using identical course books but with different lesson plans that allowed employing multimedia for the experimental group only. The devised and approved questionnaire was administered as a posttest to both groups at the end of the instruction. A multivariate ANOVA was run to compare the two groups' means on the PET, pretest and posttest. The results led to the rejection of all the null hypotheses of the study and revealed that multimedia significantly improved the technical text translation ability of the learners.
Keywords: multimedia, multimedia-mediated teaching, technical translation model, technical text, translation ability
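The study's group comparison rests on a multivariate ANOVA, which needs a dedicated statistics package; as a minimal stdlib sketch of the underlying idea, a Welch two-sample t-test comparing posttest means of the two groups might look like this (all scores below are invented for illustration, not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (len(a) - 1) + se2b ** 2 / (len(b) - 1))
    return t, df

# Hypothetical posttest translation scores (0-100) for the two groups
experimental = [78, 82, 75, 88, 91, 70, 85, 79, 83, 77]
control      = [65, 70, 62, 74, 68, 71, 60, 66, 72, 64]
t, df = welch_t(experimental, control)
```

A large positive t here would mirror the paper's finding that the multimedia group outperformed the control group; the real analysis additionally controls for PET and pretest scores.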
Procedia PDF Downloads 129
9675 Capillary Wave Motion and Atomization Induced by Surface Acoustic Waves under the Navier-Slip Condition at the Wall
Authors: Jaime E. Munoz, Jose C. Arcos, Oscar E. Bautista, Ivan E. Campos
Abstract:
The influence of the slippage phenomenon on the destabilization and atomization mechanisms induced via surface acoustic waves on a Newtonian, millimeter-sized drop deposited on a hydrophilic substrate is studied theoretically. By implementing the Navier-slip model and a lubrication-type approach into the equations which govern the dynamic response of a drop exposed to acoustic stress, a highly nonlinear evolution equation for the air-liquid interface is derived in terms of the acoustic capillary number and the slip coefficient. By numerically solving this evolution equation, the spatio-temporal deformation of the drop's free surface is obtained; in this context, atomization of the initial drop into micron-sized droplets is predicted by our numerical model once the acoustically-driven capillary waves reach a critical value: the instability length. Our results show that slippage in systems with partial and complete wetting favors the formation of capillary waves at the free surface, which translates into a larger volume of liquid being atomized, in comparison to the no-slip case, over a given time interval. In consequence, slippage at the wall can affect and improve the atomization rate of a drop exposed to a high-frequency acoustic field.
Keywords: capillary instability, lubrication theory, Navier-slip condition, SAW atomization
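The paper's evolution equation is not reproduced here, but the numerical treatment it requires can be illustrated on a simpler relative: a no-slip, unforced lubrication-type thin-film equation h_t = -d/dx(h^3 h_xxx), stepped explicitly on a periodic grid. Grid size, time step and initial profile below are illustrative choices only:

```python
import math

# Toy explicit finite-difference scheme for h_t = -d/dx(h^3 * h_xxx),
# a stand-in for the paper's interface evolution equation (which also
# carries the acoustic capillary number and slip coefficient).
N, L = 64, 1.0
dx = L / N
dt = 1e-8   # explicit schemes for 4th-order PDEs need dt on the order of dx^4
h = [1.0 + 0.1 * math.cos(2 * math.pi * i * dx / L) for i in range(N)]

def step(h):
    # third derivative by central differences, periodic wrap-around
    hxxx = [(-h[i - 2] + 2 * h[i - 1] - 2 * h[(i + 1) % N] + h[(i + 2) % N])
            / (2 * dx ** 3) for i in range(N)]
    flux = [h[i] ** 3 * hxxx[i] for i in range(N)]
    # conservative form: h_t = -divergence of the flux
    return [h[i] - dt * (flux[(i + 1) % N] - flux[i - 1]) / (2 * dx)
            for i in range(N)]

mass0 = sum(h) * dx
for _ in range(200):
    h = step(h)
mass1 = sum(h) * dx
```

Because the divergence is discretized in conservative form on a periodic grid, the discrete liquid mass sum(h)*dx telescopes and is preserved to machine precision, a property worth checking in any free-surface scheme of this kind.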
Procedia PDF Downloads 157
9674 In vitro and in vivo Infectivity of Coxiella burnetii Strains from French Livestock
Authors: Joulié Aurélien, Jourdain Elsa, Bailly Xavier, Gasqui Patrick, Yang Elise, Leblond Agnès, Rousset Elodie, Sidi-Boumedine Karim
Abstract:
Q fever is a worldwide zoonosis caused by the gram-negative obligate intracellular bacterium Coxiella burnetii. Following the recent outbreaks in the Netherlands, a hypervirulent clone was found to be the cause of severe human cases of Q fever. In livestock, the clinical manifestations of Q fever are mainly abortions. Although abortion rates differ between ruminant species, C. burnetii's virulence remains understudied, especially in enzootic areas. In this study, the infectious potential of three C. burnetii isolates collected from French farms of small ruminants was compared to the reference strain Nine Mile (in phase II and in an intermediate phase) using an in vivo (CD1 mouse) model. Mice were challenged with 10⁵ live bacteria discriminated by propidium monoazide-qPCR targeting the icd gene. After footpad inoculation, the spleen and popliteal lymph nodes were harvested at 10 days post-inoculation (p.i.). The strains' invasiveness in the spleen and popliteal nodes was assessed by qPCR assays targeting the icd gene. Preliminary results showed that the avirulent strains (in phase II) failed to pass the popliteal barrier and thus to colonize the spleen. This model allowed a significant differentiation between the strains' invasiveness in the biological host and therefore the identification of distinct virulence profiles. In view of these results, we plan to go further by testing fifteen additional C. burnetii isolates from French farms of sheep, goats and cattle using the above-mentioned in vivo model. All 15 strains display distant MLVA (multiple-locus variable-number of tandem repeat analysis) genotypic profiles. Five of the fifteen isolates will also be tested in vitro on ovine and bovine macrophage cells. Cells and supernatants will be harvested at day 1, day 2, day 3 and day 6 p.i. to assess the in vitro multiplication kinetics of the strains.
In conclusion, our findings might help the implementation of surveillance of virulent strains and ultimately allow prophylaxis measures in livestock farms to be adapted.
Keywords: Q fever, invasiveness, ruminant, virulence
Procedia PDF Downloads 362
9673 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to the poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution, as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. Fourth-generation district heating networks improve transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease transport losses even further, to increase the share of waste heat recovery and to use electricity from renewable resources through heat pumps that generate low-temperature heat. The main objective of this contribution is to exhibit, on a real-life test case, the benefits of replacing an existing third-generation network with a fifth-generation one and thereby decarbonising the heat supply of the building stock.
The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature, district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. The model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared with respect to global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or when designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
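The transport-loss benefit of lowering the network temperature can be shown with a back-of-the-envelope calculation. Assuming, purely for illustration, a third-generation supply at 80°C, a fifth-generation supply at 25°C, a ground temperature of 10°C, and pipe losses proportional to the temperature difference between carrier fluid and ground:

```python
# Hypothetical temperatures (degrees C); heat loss per metre of pipe taken
# proportional to (T_supply - T_ground) for a fixed insulation resistance.
T_ground = 10.0
T_gen3 = 80.0   # third-generation supply temperature (illustrative)
T_gen5 = 25.0   # fifth-generation supply temperature (illustrative)

loss_gen3 = T_gen3 - T_ground   # relative loss, arbitrary units
loss_gen5 = T_gen5 - T_ground
reduction = 1.0 - loss_gen5 / loss_gen3
print(f"transport-loss reduction: {reduction:.0%}")  # -> transport-loss reduction: 79%
```

The roughly four-fifths cut in pipe losses is what fifth-generation networks buy; as the abstract notes, it must be weighed against the electricity consumed by the substation heat pumps and circulation pumps.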
Procedia PDF Downloads 85
9672 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling
Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal
Abstract:
From a structural engineering viewpoint, the monitoring of structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, the development of advanced computing tools has enabled researchers to better implement structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information can be stored and exchanged via model files that are based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for the semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to the monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model applying the Unified Modeling Language (UML). Subsequently, the semantic model is mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed. The TLCD is used to reduce structural responses to dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, the acceleration response is automatically recorded by the sensor nodes equipped with accelerometers and analyzed using embedded computing.
As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing
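The step from a UML-style semantic model to an IFC-like description can be sketched with plain data classes. The property-set name and attributes below are hypothetical stand-ins, not the actual IFC schema extension developed in the study, which would be serialized with an IFC toolkit:

```python
from dataclasses import dataclass, asdict

@dataclass
class ShmSensorNode:
    """Semantic description of one wireless SHM sensor node (illustrative)."""
    node_id: str
    story: int
    quantity: str           # measured quantity, e.g. "acceleration"
    sampling_rate_hz: float

    def to_ifc_like(self):
        # Hypothetical property set; real IFC work extends the schema and
        # exchanges the data via IFC model files, not plain dictionaries.
        return {"Pset_ShmSensorNode": asdict(self)}

node = ShmSensorNode("A1", story=4, quantity="acceleration", sampling_rate_hz=100.0)
desc = node.to_ifc_like()
```

The point of the exercise is the same as in the paper: once monitoring metadata lives in a formal, exchangeable structure rather than ad hoc files, it can travel with the building model.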
Procedia PDF Downloads 154
9671 Multi-Modal Film Boiling Simulations on Adaptive Octree Grids
Authors: M. Wasy Akhtar
Abstract:
Multi-modal film boiling simulations are carried out on adaptive octree grids. The liquid-vapor interface is captured using the volume-of-fluid framework, adjusted to account for exchanges of mass, momentum, and energy across the interface. Surface tension effects are included using a volumetric source term in the momentum equations. The phase change calculations are conducted based on the exact location and orientation of the interface; however, the source terms are calculated using the mixture variables, to be consistent with the one-field formulation used to represent the entire fluid domain. The numerical model on the octree representation of the computational grid is first verified using test cases including advection tests in severely deforming velocity fields, gravity-based instabilities, and bubble growth in uniformly superheated liquid under zero gravity. The model is then used to run both single- and multi-modal film boiling simulations. The octree grid is dynamically adapted in order to maintain the highest grid resolution on the instability fronts, using markers of interface location, volume fraction, and thermal gradients. The method thus provides an efficient platform to simulate fluid instabilities with or without phase change in the presence of body forces like gravity or shear layer instabilities.
Keywords: boiling flows, dynamic octree grids, heat transfer, interface capturing, phase change
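The idea of concentrating resolution near the interface can be shown in two dimensions with a minimal quadtree (the 2-D analogue of an octree). Here a circle stands in for the liquid-vapor interface; the refinement band, depth limit and geometry are all illustrative choices, not the paper's criteria:

```python
import math

def near_interface(cx, cy, half, r=0.3, xc=0.5, yc=0.5):
    """Does the square cell (center cx, cy, half-width half) straddle the circle?"""
    d = math.hypot(cx - xc, cy - yc)
    return abs(d - r) <= half * math.sqrt(2.0)   # within one cell diagonal

def refine(cx, cy, half, depth, max_depth, leaves):
    """Recursively split cells that intersect the interface marker band."""
    if depth < max_depth and near_interface(cx, cy, half):
        q = half / 2.0
        for ox in (-q, q):
            for oy in (-q, q):
                refine(cx + ox, cy + oy, q, depth + 1, max_depth, leaves)
    else:
        leaves.append((cx, cy, half, depth))

leaves = []
refine(0.5, 0.5, 0.5, 0, 5, leaves)   # unit square, up to 5 refinement levels
```

The resulting leaf count is far below that of a uniformly refined grid at the same maximum depth, which is precisely the efficiency argument for adaptive octree film-boiling runs.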
Procedia PDF Downloads 246
9670 The Impact of International Financial Reporting Standards (IFRS) Adoption on Performance’s Measure: A Study of UK Companies
Authors: Javad Izadi, Sahar Majioud
Abstract:
This study presents an approach for assessing the choice of performance measures of companies in the United Kingdom after the application of IFRS in 2005. The aim of this study is to investigate the effects of IFRS on the choice of performance evaluation methods of UK companies. Through an econometric model, we analyse the relationship between the dependent variable, the firm's performance (a nominal variable), and the independent ones. The independent variables are split into two main groups. The first is the group of accounting-based measures: earnings per share, return on assets and return on equity. The second is the group of market-based measures: market value of property, plant and equipment, research and development, sales growth, market-to-book value, leverage, segment and size of companies. The regression used is a multinomial logistic regression performed on a sample of 130 UK listed companies. Our findings show that after IFRS adoption, companies give more importance to variables such as return on equity and sales growth to assess their performance, whereas return on assets and the market-to-book value ratio do not have as much importance as before IFRS in evaluating company performance. Some variables, such as earnings per share, no longer have any impact on the performance measures. These findings are empirically important for work related to IFRS and companies' performance measurement.
Keywords: performance measures, nominal variable, econometric model, evaluation methods
Procedia PDF Downloads 139
9669 Extension and Closure of a Field for Engineering Purpose
Authors: Shouji Yujiro, Memei Dukovic, Mist Yakubu
Abstract:
Fields are important objects of study in algebra since they provide a useful generalization of many number systems, such as the rational numbers, real numbers, and complex numbers. In particular, the usual rules of associativity, commutativity and distributivity hold. Fields also appear in many other areas of mathematics. When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure which satisfies all the properties of a field except possibly commutativity is today called a division ring, a division algebra or sometimes a skew field; the term non-commutative field is also still widely used. In French, fields are called corps (literally, body), generally regardless of their commutativity. When necessary, a (commutative) field is called a corps commutatif and a skew field a corps gauche. The German word for body is Körper, and this word is used to denote fields; hence the use of blackboard bold to denote a field. The concept of a field was first (implicitly) used to prove that there is no general formula expressing, in terms of radicals, the roots of a polynomial with rational coefficients of degree 5 or higher. An extension of a field k is simply a field K containing k as a subfield. One distinguishes between extensions having various qualities. For example, an extension K of a field k is called algebraic if every element of K is a root of some polynomial with coefficients in k. Otherwise, the extension is called transcendental. The aim of Galois theory is the study of algebraic extensions of a field. Given a field k, various kinds of closures of k may be introduced: for example, the algebraic closure, the separable closure, the cyclic closure, et cetera.
The idea is always the same: if P is a property of fields, then a P-closure of k is a field K containing k, having property P, and which is minimal in the sense that no proper subfield of K that contains k has property P. For example, if we take P(K) to be the property 'every non-constant polynomial f in K[t] has a root in K', then a P-closure of k is just an algebraic closure of k. In general, if P-closures exist for some property P and field k, they are all isomorphic. However, there is in general no preferred isomorphism between two closures.
Keywords: field theory, mechanic maths, supertech, rolltech
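A concrete instance of the extension terminology above, a standard textbook example added here for illustration: the extension Q(√2)/Q is algebraic, while Q(π)/Q is transcendental.

```latex
% \sqrt{2} is algebraic over \mathbb{Q}: it is a root of
t^2 - 2 \in \mathbb{Q}[t],
% and every element a + b\sqrt{2} with a, b \in \mathbb{Q} satisfies
(t - a)^2 - 2b^2 \;=\; t^2 - 2at + (a^2 - 2b^2) \;\in\; \mathbb{Q}[t].
% By contrast, \pi satisfies no nonzero polynomial over \mathbb{Q}
% (Lindemann), so \mathbb{Q}(\pi)/\mathbb{Q} is transcendental. The
% algebraic closure of \mathbb{Q} is the field of algebraic numbers.
```

Substituting t = a + b√2 into the second polynomial gives (b√2)² − 2b² = 0, confirming the element is algebraic of degree at most 2.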
Procedia PDF Downloads 375
9668 Potential Ecological Risk Assessment of Selected Heavy Metals in Sediments of Tidal Flat Marsh, the Case Study: Shuangtai Estuary, China
Authors: Chang-Fa Liu, Yi-Ting Wang, Yuan Liu, Hai-Feng Wei, Lei Fang, Jin Li
Abstract:
Heavy metals in sediments can cause adverse ecological effects when they exceed given criteria. The present study investigated sediment environmental quality, pollutant enrichment, ecological risk, and pollution sources for copper, cadmium, lead, zinc, mercury, and arsenic in sediments collected from the tidal flat marsh of the Shuangtai estuary, China. The arithmetic mean integrated pollution index, geometric mean integrated pollution index, fuzzy integrated pollution index, and principal component score were used to characterize sediment environmental quality; fuzzy similarity and the geo-accumulation index were used to evaluate pollutant enrichment; the correlation matrix, principal component analysis, and cluster analysis were used to identify pollution sources; the environmental risk index and potential ecological risk index were used to assess ecological risk. The environmental quality of the sediments is classified as a very low degree of contamination or low contamination. The pollutant enrichment analysis ranks the regions by similarity to the element background of soil in the Liaohe plain in the order Sanjiaozhou, Honghaitan, Sandaogou, Xiaohe. The source identification indicates that the metals are significantly correlated with one another, except copper with cadmium. Cadmium, lead, zinc, mercury, and arsenic cluster together as the first principal component; copper clusters as the second principal component. The environmental risk assessment level is scaled to no risk in the studied area. The order of potential ecological risk is As > Cd > Hg > Cu > Pb > Zn.
Keywords: ecological risk assessment, heavy metals, sediment, marsh, Shuangtai estuary
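The two index families used above follow standard formulas: the geo-accumulation index Igeo = log2(C / (1.5 B)) and Hakanson's potential ecological risk factor Er = Tr * C / B, with RI the sum of the Er values. A stdlib sketch with invented concentrations and background values (the toxic-response factors Tr are the commonly used Hakanson ones, but the C and B values are made up for illustration):

```python
import math

# Toxic-response factors after Hakanson; concentrations C and backgrounds B
# (mg/kg) below are invented for illustration, not the study's measurements.
T_R = {"Cu": 5, "Cd": 30, "Pb": 5, "Zn": 1, "Hg": 40, "As": 10}
C   = {"Cu": 20.0, "Cd": 0.2, "Pb": 18.0, "Zn": 60.0, "Hg": 0.04, "As": 8.0}
B   = {"Cu": 20.0, "Cd": 0.1, "Pb": 20.0, "Zn": 65.0, "Hg": 0.04, "As": 8.0}

def igeo(metal):
    """Geo-accumulation index: log2(C / (1.5 * background))."""
    return math.log2(C[metal] / (1.5 * B[metal]))

def risk_factors():
    """Per-metal potential ecological risk factor Er and their sum RI."""
    e = {m: T_R[m] * C[m] / B[m] for m in T_R}
    return e, sum(e.values())

e, ri = risk_factors()
```

With such indices, the per-metal ranking (like the study's As > Cd > Hg > Cu > Pb > Zn) follows directly by sorting the Er values.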
Procedia PDF Downloads 350
9667 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh
Authors: Md. Qamruzzaman, Wei Jianguo
Abstract:
Over the past decade, it has been observed that financial development, through financial innovation, has not only accelerated the development of an efficient and effective financial system but has also acted as a catalyst in the economic development process. In this study, we explore how financial innovation causes economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. A cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; the estimation shows that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, but that changes in the growth rate have no long-run impact on capital flow in the economy or on the level of inflation. Growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause GDP fluctuations in both the long run and the short run. Financial innovation promotes efficiency and reduces the cost of transactions in the financial system, and can thus boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial and capital markets should be regulated, with rules and regulations implemented to create a conducive environment.
Keywords: financial innovation, economic growth, GDP, financial institution, VECM
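The error-correction mechanism at the heart of a VECM can be illustrated in miniature: simulate a pair of cointegrated series and regress the change in one on the lagged equilibrium error; a significantly negative coefficient is the "correction" pulling the system back to its long-run relation. Everything below (series, coefficients) is simulated, not the study's data:

```python
import random

random.seed(7)

# x is a random walk; y error-corrects towards x with true speed 0.5.
n = 500
x, y = [0.0], [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, 0.5))
    ec = y[-1] - x[-2]                  # lagged equilibrium error y_{t-1} - x_{t-1}
    y.append(y[-1] - 0.5 * ec + random.gauss(0.0, 0.1))

# OLS slope of dy_t on ec_{t-1}: the estimated error-correction coefficient.
dys = [y[t] - y[t - 1] for t in range(1, n + 1)]
ecs = [y[t - 1] - x[t - 1] for t in range(1, n + 1)]
m_ec = sum(ecs) / len(ecs)
m_dy = sum(dys) / len(dys)
slope = (sum((e - m_ec) * (d - m_dy) for e, d in zip(ecs, dys))
         / sum((e - m_ec) ** 2 for e in ecs))
```

The estimated slope should land near the true value of -0.5; in the paper's setting the analogous coefficient measures how fast GDP returns to its long-run relation with the financial-innovation proxies.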
Procedia PDF Downloads 272
9666 Latent Factors of Severity in Truck-Involved and Non-Truck-Involved Crashes on Freeways
Authors: Shin-Hyung Cho, Dong-Kyu Kim, Seung-Young Kho
Abstract:
Truck-involved crashes have higher severity than non-truck-involved crashes. There have been many studies on crash frequency and on the development of severity models, but those studies only analyzed relationships between observed variables. To identify why more people are injured or killed when trucks are involved in a crash, we must quantify the complex causal relationship between crash severity and risk factors by adopting latent factors of crashes. The aim of this study was to develop a structural equation model based on truck-involved and non-truck-involved crashes, including five latent variables, i.e. a crash factor, an environmental factor, a road factor, a driver factor, and a severity factor. To clarify the unique characteristics of truck-involved crashes compared to non-truck-involved crashes, a confirmatory analysis method was used. To develop the model, we extracted data on 10,083 crashes on Korean freeways from 2008 through 2014. The results showed that the most significant variable affecting crash severity is the crash factor, which can be expressed by the location, cause, and type of the crash. For non-truck-involved crashes, the crash and environmental factors increase crash severity; conversely, the road and driver factors tend to reduce it. For truck-involved crashes, the driver factor has a significant effect on crash severity, although its effect is slightly smaller than that of the crash factor. A multiple group analysis was employed to analyze the differences between the heterogeneous groups of drivers.
Keywords: crash severity, structural equation modeling (SEM), truck-involved crashes, multiple group analysis, crashes on freeways
Procedia PDF Downloads 384
9665 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines
Authors: P. Byrnes, F. A. DiazDelaO
Abstract:
The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, the SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of the SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines
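The MCMC idea the authors advocate, sampling a posterior rather than settling for a type II point estimate, can be shown in a minimal self-contained form: a random-walk Metropolis sampler for the success probability of a Bernoulli model (7 successes in 10 trials, flat prior), whose exact posterior is Beta(8, 4). This toy target stands in for the far higher-dimensional PCVM posterior:

```python
import math
import random

random.seed(0)

def log_post(p, k=7, n=10):
    """Log posterior of a Bernoulli success probability under a flat prior."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

p, samples = 0.5, []
for i in range(20000):
    proposal = p + random.uniform(-0.1, 0.1)          # random-walk proposal
    log_ratio = log_post(proposal) - log_post(p)
    if random.random() < math.exp(min(0.0, log_ratio)):  # Metropolis accept step
        p = proposal
    if i >= 2000:                                     # discard burn-in
        samples.append(p)

post_mean = sum(samples) / len(samples)               # exact Beta(8, 4) mean is 2/3
```

The chain's sample mean approximates the analytic posterior mean of 2/3; in the PCVM setting the same machinery yields distributions over kernel hyperparameters for which no closed form exists.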
Procedia PDF Downloads 222
9664 Productivity-Emotiveness Model of School Students’ Capacity Levels
Authors: Ivan Samokhin
Abstract:
A new two-factor model of school students' capacity levels is proposed. It considers the academic productivity and the emotional condition of children taking part in the study process. Each basic level reflects the correlation of these two factors. The teacher decides whether the required result is achieved or not and writes down the grade (from 'A' to 'F') in the register. During the term, the teacher can assess the students' progress at any interval, but it is not desirable to exceed a two-week period (with primary school being an exception). Each boy or girl should have a special notebook to record the emotions they feel when studying a subject. The children can make their notes the way they like, for example, using a ten-point scale or a short verbal description. It is recommended to record the emotions twice a day: after the lesson and after doing the homework. Before the students start doing this, they should be instructed by a school psychologist, who has to emphasize that the attitude to the subject, not to the person teaching it, is what is relevant. At the end of the term, the notebooks are given to the teacher, who is now able to draw preliminary conclusions about the academic results and psychological comfort of each student. If necessary, pedagogical measures can be taken. The data about a student's supposed capacity level are available to the teacher and the school administration. In certain cases, this information can also be revealed to the student's parents, while the student learns it only after receiving the school-leaving certificate (until that moment, the results are not considered final). A person may then take these data into consideration when choosing his or her future area of higher education. We single out four main capacity levels: 'nominally low', 'inclination', 'ability' and 'gift'.
Keywords: academic productivity, capacity level, emotional condition, school students
Procedia PDF Downloads 226
9663 Port Logistics Integration: Challenges and Approaches: Case Study; Iranian Seaports
Authors: Ali Alavi, Hong-Oanh Nguyen, Jiangang Fei, Jafar Sayareh
Abstract:
Today's competitive market in the port sector depends highly on logistics practices, functions and activities, and seaports play a key role in port logistics chains. Despite the well-articulated importance of ports and terminals in integrated logistics, the role of success factors in port logistics integration has rarely been examined. The objective of this paper is to fill this gap in the literature and provide an insight into how seaports and terminals may improve their logistics integration. First, a literature review of studies on logistics integration in seaports and terminals is conducted. Second, a new conceptual framework for port logistics integration is proposed to incorporate the role of new variables emerging from recent developments in the global business environment. Third, the model is tested in the Iranian port and maritime sector using a self-administered online survey among logistics chain actors in Iranian seaports, such as shipping line operators, logistics service providers, port authorities, logistics companies and other related actors. The results found logistics processes and operations, information integration, value-added services, and logistics practices to be influential on logistics integration. A conceptual framework is developed that extends the existing framework and incorporates the variables of organizational activities, resource sharing, and institutional support. Further examination of the proposed model across multiple contexts is necessary to validate the findings. The framework could be made more detailed on each factor and consider the actors' perspectives.
Keywords: maritime logistics, port integration, logistics integration, supply chain integration
Procedia PDF Downloads 250
9662 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting actual cost and duration in construction projects is a continuing problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the WEKA application, through its attribute selection function. The selected variables are used as input neurons for the neural network models. For constructing the neural network models, the FANN Tool application is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN Tool, WEKA
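Feeding the two selected inputs (budgeted cost and deck-concrete quantity) into a small network can be sketched with a hand-rolled one-hidden-layer regressor. The training data below are synthetic and the 2-2-1 tanh architecture is illustrative, not the paper's FANN configuration:

```python
import math
import random

random.seed(42)

# Synthetic normalized (budgeted cost, deck concrete) -> actual cost samples.
data = [(random.random(), random.random()) for _ in range(30)]
targets = [0.6 * bc + 0.3 * dc + random.gauss(0, 0.02) for bc, dc in data]

H = 2                                    # hidden neurons
w = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b = [0.0] * H
v = [random.uniform(-0.5, 0.5) for _ in range(H)]
c = 0.0

def forward(x):
    h = [math.tanh(w[j][0] * x[0] + w[j][1] * x[1] + b[j]) for j in range(H)]
    return h, sum(v[j] * h[j] for j in range(H)) + c

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(data, targets)) / len(data)

loss0 = mse()
lr = 0.05
for _ in range(500):                     # plain batch gradient descent
    gw = [[0.0, 0.0] for _ in range(H)]; gb = [0.0] * H
    gv = [0.0] * H; gc = 0.0
    for x, t in zip(data, targets):
        h, yhat = forward(x)
        g = 2.0 * (yhat - t) / len(data)         # dLoss/dyhat
        for j in range(H):
            gv[j] += g * h[j]
            back = g * v[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            gw[j][0] += back * x[0]; gw[j][1] += back * x[1]; gb[j] += back
        gc += g
    for j in range(H):
        v[j] -= lr * gv[j]; b[j] -= lr * gb[j]
        w[j][0] -= lr * gw[j][0]; w[j][1] -= lr * gw[j][1]
    c -= lr * gc

loss1 = mse()
```

A tool like FANN automates exactly this loop (with better training algorithms); the point here is only to make the input-neuron/MSE vocabulary of the abstract concrete.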
Procedia PDF Downloads 136
9661 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests
Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski
Abstract:
Cancellous bone is a porous composite of a hierarchical structure and anisotropic properties. The biological tissue is considered to be a viscoelastic material, but many studies based on a nanoindentation method have focused on their elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude, but also on its duration and time course. Depth Sensing Indentation (DSI) technique has been used for examination of creep in polymers, metals and composites. In the indentation tests on biological samples, the mechanical properties are most frequently determined for animal tissues (of an ox, a monkey, a pig, a rat, a mouse, a bovine). However, there are rare reports of studies of the bone viscoelastic properties on microstructural level. Various rheological models were used to describe the viscoelastic behaviours of bone, identified in the indentation process (e. g Burgers model, linear model, two-dashpot Kelvin model, Maxwell-Voigt model). The goal of the study was to determine the influence of creep effect on the mechanical properties of human cancellous bone in indentation tests. The aim of this research was also the assessment of the material properties of bone structures, having in mind the energy aspects of the curve (penetrator loading-depth) obtained in the loading/unloading cycle. There was considered how the different holding times affected the results within trabecular bone.As a result, indentation creep (CIT), hardness (HM, HIT, HV) and elasticity are obtained. Human trabecular bone samples (n=21; mean age 63±15yrs) from the femoral heads replaced during hip alloplasty were removed and drained from alcohol of 1h before the experiment. The indentation process was conducted using CSM Microhardness Tester equipped with Vickers indenter. Each sample was indented 35 times (7 times for 5 different hold times: t1=0.1s, t2=1s, t3=10s, t4=100s and t5=1000s). The indenter was advanced at a rate of 10mN/s to 500mN. 
The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease in hardness (HIT(t1)=418±34 MPa, HIT(t2)=390±50 MPa, HIT(t3)=313±54 MPa, HIT(t4)=305±54 MPa, HIT(t5)=276±90 MPa) and elasticity (EIT(t1)=7.7±1.2 GPa, EIT(t2)=8.0±1.5 GPa, EIT(t3)=7.0±0.9 GPa, EIT(t4)=7.2±0.9 GPa, EIT(t5)=6.2±1.8 GPa), as well as with an increase in the elastic (Welastic(t1)=4.11∙10⁻⁷±4.2∙10⁻⁸ Nm, Welastic(t2)=4.12∙10⁻⁷±6.4∙10⁻⁸ Nm, Welastic(t3)=4.71∙10⁻⁷±6.0∙10⁻⁹ Nm, Welastic(t4)=4.33∙10⁻⁷±5.5∙10⁻⁹ Nm, Welastic(t5)=5.11∙10⁻⁷±7.4∙10⁻⁸ Nm) and inelastic (Winelastic(t1)=1.05∙10⁻⁶±1.2∙10⁻⁷ Nm, Winelastic(t2)=1.07∙10⁻⁶±7.6∙10⁻⁸ Nm, Winelastic(t3)=1.26∙10⁻⁶±1.9∙10⁻⁷ Nm, Winelastic(t4)=1.56∙10⁻⁶±1.9∙10⁻⁷ Nm, Winelastic(t5)=1.67∙10⁻⁶±2.6∙10⁻⁷ Nm) reaction of the material. The indentation creep increased logarithmically (R²=0.901) with increasing hold time: CIT(t1)=0.08±0.01%, CIT(t2)=0.7±0.1%, CIT(t3)=3.7±0.3%, CIT(t4)=12.2±1.5%, CIT(t5)=13.5±3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. The viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation. Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
Keywords: bone, creep, indentation, mechanical properties
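The Oliver-Pharr quantities reported above follow standard relations: contact depth hc = hmax − ε·Pmax/S (ε = 0.75 for a Vickers-type indenter), indentation hardness HIT = Pmax/Ap with projected area Ap ≈ 24.5·hc² for ideal Vickers geometry, and indentation creep CIT = (h2 − h1)/h1 · 100 from the depths at the start and end of the hold. A minimal sketch with illustrative numbers (not the study's data):

```python
# Minimal Oliver-Pharr / indentation-creep sketch.
# Assumptions: ideal Vickers geometry Ap = 24.5*hc^2, epsilon = 0.75;
# the numeric inputs below are invented for illustration.

def contact_depth(h_max, p_max, stiffness, eps=0.75):
    """Contact depth hc = h_max - eps * P_max / S (Oliver-Pharr)."""
    return h_max - eps * p_max / stiffness

def indentation_hardness(p_max, hc):
    """H_IT = P_max / Ap, with projected area Ap ~ 24.5 * hc^2 (Vickers)."""
    return p_max / (24.5 * hc ** 2)

def indentation_creep(h1, h2):
    """C_IT [%] = (h2 - h1) / h1 * 100, depths at start/end of the hold."""
    return (h2 - h1) / h1 * 100.0

# Example: 500 mN peak load, depths in metres, unloading stiffness in N/m
p_max = 500e-3                                                   # N
hc = contact_depth(h_max=9.0e-6, p_max=p_max, stiffness=2.0e5)   # m
H_IT = indentation_hardness(p_max, hc)                           # Pa (~0.4 GPa)
C_IT = indentation_creep(h1=8.8e-6, h2=9.0e-6)                   # percent
```

With these made-up inputs the sketch reproduces the order of magnitude reported for trabecular bone (HIT of a few hundred MPa, CIT of a few percent).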
Procedia PDF Downloads 172
9660 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network
Authors: Yasaman Sanayei, Alireza Bahiraie
Abstract:
This paper presents a systematic methodology based on the application of artificial neural networks (ANNs) to a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology, especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant, predefined stage durations, resulting in inefficient operation. Data obtained from the on-line electronic sensors installed in the SBR and from control quality laboratory analysis have been used to develop the optimal architectures of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis
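The two fit statistics quoted above, R² and mean absolute error, can be computed directly from observed and predicted effluent values; a small pure-Python sketch (the COD figures below are invented, not the study's measurements):

```python
# Goodness-of-fit metrics used to judge ANN predictions (illustrative data).

def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def mean_absolute_error(observed, predicted):
    """MAE = mean of |observed - predicted|."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

cod_obs = [420.0, 380.0, 305.0, 250.0, 198.0]    # hypothetical effluent COD, mg/L
cod_pred = [415.0, 384.0, 300.0, 255.0, 200.0]   # hypothetical ANN output

r2 = r_squared(cod_obs, cod_pred)
mae = mean_absolute_error(cod_obs, cod_pred)     # 4.2 mg/L for this toy data
```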
Procedia PDF Downloads 415
9659 A Methodology for the Synthesis of Multi-Processors
Authors: Hamid Yasinian
Abstract:
Random epistemologies and hash tables have garnered minimal interest from both security experts and information theorists in the last several years. In fact, few information theorists would disagree with the evaluation of expert systems. In our research, we discover how flip-flop gates can be applied to the study of superpages. Though such a hypothesis at first glance seems perverse, it is derived from known results.
Keywords: synthesis, multi-processors, interactive model, Moore's law
Procedia PDF Downloads 437
9658 Coastalization and Urban Sprawl in the Mediterranean: Using High-Resolution Multi-Temporal Data to Identify Typologies of Spatial Development
Authors: Apostolos Lagarias, Anastasia Stratigea
Abstract:
Coastal urbanization is heavily affecting the Mediterranean, taking the form of linear urban sprawl along the coastal zone. This process is putting extreme pressure on ecosystems, leading to an unsustainable model of growth. The aim of this research is to analyze coastal urbanization patterns in the Mediterranean using high-resolution multi-temporal data provided by the Global Human Settlement Layer (GHSL) database. The methodology involves the estimation of a set of spatial metrics characterizing the density, aggregation/clustering and dispersion of built-up areas. The Spanish coast and the Adriatic Italian coast are examined as case study areas. Coastalization profiles are examined, and selected sub-areas massively affected by tourism development and suburbanization trends (Costa Blanca/Murcia, Costa del Sol, Puglia, the Emilia-Romagna coast) are analyzed and compared. Results show that there are considerable differences between the Spanish and Italian typologies of spatial development, related to the land use structure and planning policies applied in each case. Monitoring and analyzing spatial patterns could inform integrated Mediterranean strategies for coastal areas and redirect spatial/environmental policies towards a more sustainable model of growth.
Keywords: coastalization, Mediterranean, multi-temporal, urban sprawl, spatial metrics
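GHSL built-up layers are essentially binary rasters, so density and aggregation metrics of the kind described above reduce to simple cell and neighbourhood counts. A toy sketch on a hand-made grid (the grid and the particular aggregation measure are illustrative, not the paper's metric definitions):

```python
# Toy spatial metrics on a binary built-up grid (1 = built-up cell).
# Grid and metric choices are illustrative only, not GHSL data.

def built_up_density(grid):
    """Share of built-up cells in the raster."""
    cells = [c for row in grid for c in row]
    return sum(cells) / len(cells)

def aggregation_index(grid):
    """Mean share of built-up 4-neighbours around each built-up cell:
    a crude clustering measure (1.0 = fully clumped interior)."""
    rows, cols = len(grid), len(grid[0])
    shares = []
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                inside = [(a, b) for a, b in nbrs if 0 <= a < rows and 0 <= b < cols]
                shares.append(sum(grid[a][b] for a, b in inside) / len(inside))
    return sum(shares) / len(shares) if shares else 0.0

# A compact "coastal strip" pattern: built-up cells hugging one edge
grid = [
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
density = built_up_density(grid)       # 6/12 = 0.5
clustering = aggregation_index(grid)
```

Comparing such indices across sub-areas and dates is the mechanism by which linear sprawl (high density, low clustering along a strip) can be told apart from compact growth.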
Procedia PDF Downloads 141
9657 Color-Based Emotion Regulation Model: An Affective E-Learning Environment
Authors: Sabahat Nadeem, Farman Ali Khan
Abstract:
Emotions are considered a vital factor affecting information handling, level of attention, memory capacity and decision making. The latest e-learning systems therefore take the affective state of learners into consideration to make the learning process more effective and enjoyable. One such use of a user's affective information is in systems that regulate users' emotions to a state optimally desirable for learning. So far, this objective has been pursued with the help of teaching strategies, background music, guided imagery, video clips and odors. Nevertheless, we know that colors can affect human emotions. The relationship between color and emotion has a strong influence on how we perceive our environment. Similarly, the colors of an interface can affect the user positively as well as negatively. This affective behavior of color, and its use as an emotion regulation agent, has not yet been exploited. Therefore, this research proposes a Color-based Emotion Regulation Model (CERM), a new framework that can automatically adapt its colors according to a user's emotional state and personality type and can help produce a desirable emotional effect, aiming to provide unobtrusive emotional support to the users of an e-learning environment. The evaluation of CERM is carried out by comparing it with a classical, non-adaptive, statically colored learning management system. Results indicate that the colors of the interface, when carefully selected, have a significant positive impact on learners' emotions.
Keywords: affective learning, e-learning, emotion regulation, emotional design
Procedia PDF Downloads 307
9656 Orbit Determination Modeling with Graphical Demonstration
Authors: Assem M. F. Sallam, Ah. El-S. Makled
Abstract:
This paper presents the implementation, verification, and graphical demonstration of a software application that can be used swiftly across different preliminary orbit determination methods. A passive orbit determination method is used in this study to determine the location of a satellite or a flying body. It is called passive orbit determination because it depends on observation alone, without the use of any aids (radio or laser) installed on the satellite. The built models help in understanding how these methods work, the different inputs used with each method, and how accurate their output is when compared with available verification data. Output from the different orbit determination methods (Gibbs, Lambert, and Gauss) is compared across methods and verified against data obtained from the Satellite Tool Kit (STK) application. A modified model including all of the orbit determination methods with the same input is introduced to investigate the different models' outputs (orbital parameters) for the same input (azimuth, elevation, and time). The simulation software is implemented in MATLAB. A Graphical User Interface (GUI) application named OrDet is produced using the GUI tools of MATLAB. It accepts all the available inputs and outputs the current Classical Orbital Elements (COE) of the satellite under observation. The produced COE are then propagated for a complete revolution and plotted in a 3-D view. The modified model uses an adapter to allow the same input parameters and passes them to the preliminary orbit determination methods under study. Results from all orbit determination methods yield exactly the same COE output, which shows the equality of concept in determining a satellite's location, albeit with different numerical methods.
Keywords: orbit determination, STK, Matlab-GUI, satellite tracking
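The COE output described above comes from standard two-body relations applied to a position/velocity state vector. A minimal sketch computing two of the elements, semi-major axis (via vis-viva) and eccentricity (via the eccentricity vector), with a circular-orbit sanity check; this is illustrative only, since the paper's OrDet tool covers the full element set:

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's standard gravitational parameter

def semi_major_axis(r_vec, v_vec, mu=MU_EARTH):
    """a from the vis-viva energy relation: 1/a = 2/r - v^2/mu."""
    r = math.sqrt(sum(c * c for c in r_vec))
    v2 = sum(c * c for c in v_vec)
    return 1.0 / (2.0 / r - v2 / mu)

def eccentricity(r_vec, v_vec, mu=MU_EARTH):
    """|e| from the eccentricity vector e = ((v^2 - mu/r) r - (r.v) v) / mu."""
    r = math.sqrt(sum(c * c for c in r_vec))
    v2 = sum(c * c for c in v_vec)
    rv = sum(a * b for a, b in zip(r_vec, v_vec))
    e_vec = [((v2 - mu / r) * rc - rv * vc) / mu for rc, vc in zip(r_vec, v_vec)]
    return math.sqrt(sum(c * c for c in e_vec))

# Sanity check: a circular 7000-km orbit should give a = 7000 km, e = 0
r0 = [7000.0, 0.0, 0.0]                # km
v_circ = math.sqrt(MU_EARTH / 7000.0)  # km/s, circular speed at that radius
v0 = [0.0, v_circ, 0.0]
```

Each preliminary method (Gibbs, Lambert, Gauss) differs only in how it obtains the velocity vector from observations; once r and v are in hand, the COE conversion is the same, which is why all methods agree on the final elements.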
Procedia PDF Downloads 283
9655 Comparative Study of Vertical and Horizontal Triplex Tube Latent Heat Storage Units
Authors: Hamid El Qarnia
Abstract:
This study investigates the impact of the eccentricity of the central tube on the thermal and fluid characteristics of a triplex tube used in latent heat energy storage technologies. Two triplex tube orientations are considered: vertical and horizontal. The energy storage material, a phase change material (PCM), is placed in the space between the inner and outer tubes. During the thermal energy storage period, a heat transfer fluid (HTF) flows inside the two tubes, transmitting heat to the PCM through two heat exchange surfaces instead of one, as is the case for double tube heat storage systems. A CFD model is developed and validated against experimental data available in the literature. A mesh independence study is carried out to select the appropriate mesh. In addition, different time steps are examined to determine a time step that ensures accuracy of the numerical results while reducing the computational time. The numerical model is then used to investigate the thermal behavior and thermal performance of the storage unit. The effects of the eccentricity of the central tube and the HTF mass flow rate on thermal characteristics and performance indicators are examined for two flow arrangements: co-current and counter-current flows. The results are given in terms of isotherm plots, streamlines, melting time and thermal energy storage efficiency.
Keywords: energy storage, heat transfer, melting, solidification
Procedia PDF Downloads 56
9654 Supply Chain Network Design for Perishable Products in Developing Countries
Authors: Abhishek Jain, Kavish Kejriwal, V. Balaji Rao, Abhigna Chavda
Abstract:
Increasing environmental and social concerns are forcing companies to take a fresh view of the impact of supply chain operations on environment and society when designing a supply chain. A challenging task in today's food industry is the distribution of high-quality food items throughout the food supply chain. Improper storage and unwanted transportation are the major hurdles in the food supply chain and can be tackled by making dynamic storage facility location decisions within the distribution network. Since the food supply chain in India is one of the biggest supply chains in the world, companies should also consider the environmental impact caused by the supply chain. This project proposes a multi-objective optimization model that integrates sustainability into decision-making on distribution in a food supply chain network (SCN). A Multi-Objective Mixed-Integer Linear Programming (MOMILP) model trading off overall cost against the environmental impact caused by the SCN is formulated for the problem. The goal of the MOMILP is to determine the Pareto solutions for overall cost and environmental impact. It is solved using GAMS with CPLEX as a third-party solver. The outcomes of the project are Pareto solutions for overall cost and environmental impact, the facilities to be operated, and the amount to be transferred to each warehouse during the time horizon.
Keywords: multi-objective mixed linear programming, food supply chain network, GAMS, multi-product, multi-period, environment
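The Pareto solutions mentioned above are the non-dominated trade-offs between total cost and environmental impact. The MOMILP itself requires a solver like CPLEX, but the non-dominance test is simple to state; a minimal sketch over a handful of invented candidate solutions (not the model's actual output):

```python
# Pareto (non-dominated) filter over (cost, environmental impact) pairs,
# both objectives minimized. Candidate solutions are invented for illustration.

def is_dominated(point, others):
    """True if some other point is no worse in both objectives
    and strictly better in at least one."""
    c, e = point
    return any(oc <= c and oe <= e and (oc < c or oe < e) for oc, oe in others)

def pareto_front(points):
    """Keep only the non-dominated points, sorted by cost."""
    return sorted(p for p in points
                  if not is_dominated(p, [q for q in points if q != p]))

candidates = [
    (100.0, 9.0),   # cheap network, high impact
    (120.0, 6.0),
    (150.0, 4.0),
    (160.0, 5.0),   # dominated by (150.0, 4.0): costlier AND dirtier
    (200.0, 3.5),   # expensive network, lowest impact
]
front = pareto_front(candidates)
```

In the weighted-sum approach often used with GAMS, each point on this front corresponds to one run of the MILP with a different weighting of the two objectives.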
Procedia PDF Downloads 321
9653 Docking, Pharmacophore Modeling and 3d QSAR Studies on Some Novel HDAC Inhibitors with Heterocyclic Linker
Authors: Harish Rajak, Preeti Patel
Abstract:
The application of histone deacetylase inhibitors (HDACIs) is a well-known strategy in the prevention of cancer, showing acceptable preclinical antitumor activity owing to their ability to inhibit growth and induce apoptosis in cancer cells. Molecular docking was performed using the histone deacetylase protein (PDB ID: 1t69) and a prepared series of hydroxamic acid based HDACIs. On the basis of the docking study, it was predicted that compound 1 has a significant binding interaction with the HDAC protein, and three hydrogen bond interactions take place, which are essential for antitumor activity. On docking, most of the compounds exhibited glide score values between -8 and -10, close to the glide score of suberoylanilide hydroxamic acid. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module. The 3D-QSAR models provided a good correlation between predicted and actual anticancer activity. The best QSAR model showed Q² (0.7974), R² (0.9200) and standard deviation (0.2308). QSAR visualization maps suggest that hydrogen bond acceptor groups at the carbonyl group of the cap region and hydrophobic groups at the ortho, meta and para positions of R9 were favorable for HDAC inhibitory activity. We established structure-activity correlations using docking, pharmacophore modeling and an atom-based 3D QSAR model for hydroxamic acid based HDACIs.
Keywords: HDACIs, QSAR, e-pharmacophore, docking, suberoylanilide hydroxamic acid
Procedia PDF Downloads 302
9652 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor
Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar
Abstract:
Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px while the resolution of the Depth frame is 512px x 424px. As the resolution of the frames is not equal, RGB pixels are mapped onto the Depth pixels to make sure data is not lost even if the resolution is lower. The resulting RGB-D frames are collected and using the depth coordinates, a three dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. 
Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct any errors in the depth data for the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which generates the rough surface of the limb. This technique forms an approximation of the surface of the limb. The mesh is smoothened to obtain a smooth outer layer that gives an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file for design of the brace over the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, would be quicker than traditional plaster of Paris cast modelling, and has a low overall setup time. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for modelling.
Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration
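The merging step described above amounts to a rigid transform of each view into the common cube-reference frame: a rotation (the views are 90 degrees apart) followed by a translation onto the reference origin. A toy sketch of that idea (the angles, offsets and points are invented, not the paper's calibration):

```python
import math

# Toy rigid-transform merge of two point-cloud views 90 degrees apart.
# Geometry below is illustrative, not the paper's cube calibration.

def rotate_z(points, angle_deg):
    """Rotate 3-D points about the vertical (z) axis."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def translate(points, offset):
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for x, y, z in points]

def merge_views(view_a, view_b, view_b_angle_deg, view_b_offset):
    """Bring view_b into view_a's frame, then concatenate the clouds."""
    return view_a + translate(rotate_z(view_b, view_b_angle_deg), view_b_offset)

# The same physical point seen from two Kinects placed 90 degrees apart:
front_view = [(1.0, 0.0, 0.5)]
side_view = [(0.0, -1.0, 0.5)]   # rotates onto (1, 0, 0.5) under +90 degrees
merged = merge_views(front_view, side_view, 90.0, (0.0, 0.0, 0.0))
```

After this alignment, both copies of the point coincide (up to sensor noise), which is exactly the condition the coloured-cube reference system is used to enforce before the Delaunay meshing step.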
Procedia PDF Downloads 191
9651 Non-Methane Hydrocarbons Emission during the Photocopying Process
Authors: Kiurski S. Jelena, Aksentijević M. Snežana, Kecić S. Vesna, Oros B. Ivana
Abstract:
The proliferation of electronic equipment in the photocopying environment has not only improved work efficiency but has also changed indoor air quality. Considering the amount of photocopying performed, indoor air quality might be worse than in general office environments. Determining the contribution of any type of equipment to indoor air pollution is a complex matter. Non-methane hydrocarbons are known to play an important role in air quality due to their high reactivity. The presence of hazardous pollutants in indoor air was detected in a photocopying shop in Novi Sad, Serbia. Air samples were collected and analyzed for five days, during the 8-hr working time in three time intervals, at three different sampling points. Using a multiple linear regression model and the software package STATISTICA 10, the concentrations of occupational hazards and the microclimate parameters were mutually correlated. Based on the obtained multiple coefficients of determination (0.3751, 0.2389, and 0.1975), a weak positive correlation between the observed variables was determined. Small values of the F parameter indicated that there was no statistically significant difference between the concentration levels of non-methane hydrocarbons and the microclimate parameters. The results showed that the variables could be represented by the general regression model: y = b0 + b1xi1 + b2xi2. The obtained regression equations make it possible to quantify the agreement between the variations of the variables and thus obtain more accurate knowledge of their mutual relations.
Keywords: non-methane hydrocarbons, photocopying process, multiple regression analysis, indoor air quality, pollutant emission
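The stated model y = b0 + b1·xi1 + b2·xi2 is an ordinary least squares fit with two predictors, obtainable from the normal equations XᵀX b = Xᵀy. A self-contained sketch with a tiny Gaussian-elimination solver; the data are synthetic, generated from known coefficients so the fit can be checked, and are not the shop measurements:

```python
# Fit y = b0 + b1*x1 + b2*x2 by ordinary least squares (normal equations).
# Synthetic, noise-free data generated from known coefficients.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [a - f * c for a, c in zip(m[r], m[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_two_predictors(x1, x2, y):
    """Normal equations X^T X b = X^T y with design rows [1, x1, x2]."""
    rows = [(1.0, a, b) for a, b in zip(x1, x2)]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve3(XtX, Xty)

# Hypothetical predictors (e.g. temperature and humidity readings)
x1 = [20.0, 22.0, 25.0, 23.0, 21.0, 24.0]
x2 = [40.0, 45.0, 50.0, 42.0, 48.0, 55.0]
y = [2 + 0.5 * a - 1.5 * b for a, b in zip(x1, x2)]  # true b = (2, 0.5, -1.5)
b0, b1, b2 = fit_two_predictors(x1, x2, y)
```

With noise-free data OLS recovers the generating coefficients exactly; with real measurements the residual scatter is what drives the low R² values (0.38 and below) reported above.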
Procedia PDF Downloads 378
9650 Identifying Physical and Psycho-Social Issues Facing Breast Cancer Survivors after Definitive Treatment for Early Breast Cancer: A Nurse-Led Clinic Model
Authors: A. Dean, M. Pitcher, L. Storer, K. Shanahan, I. Rio, B. Mann
Abstract:
Purpose: Breast cancer survivors are at risk of specific physical and psycho-social issues, such as arm swelling, fatigue, and depression. Firstly, we investigate symptoms reported by Australian breast cancer survivors upon completion of definitive treatment. Secondly, we evaluate the appropriateness and effectiveness of a multi-centre pilot program of nurse-led clinics to identify these issues and make timely referrals to available services. Methods: Patients post-definitive treatment (excluding ongoing hormonal therapy) for early breast cancer or ductal carcinoma in situ were invited to participate. An hour-long appointment with a breast care nurse (BCN) was scheduled. In preparation, patients completed validated quality-of-life surveys (FACT-B, Menopause Rating Scale, Distress Thermometer). During the appointment, issues identified in the surveys were addressed and referrals to appropriate services arranged. Results: 183 of 274 (67%) eligible patients attended a nurse-led clinic. Mean age was 56.8 years (range 29-87 years); 181/183 were women, and 105/183 were post-menopausal. 96 (55%) participants reported a significant level of distress; 31 (18%) reported extreme distress or depression. Distress stemmed from a lack of energy (56/175), poor quality of sleep (50/176), inability to work or participate in household activities (35/172) and problems with sex life (28/89). 166 referrals were offered; 94% of patients accepted them. 65% responded to a follow-up survey: the majority of women either strongly agreed or agreed that the BCN was overwhelmingly supportive, helpful in making referrals, and compassionate towards them. 39% reported making lifestyle changes as a result of the BCN. Conclusion: Breast cancer survivors experience a unique set of challenges, including low mood, difficulty sleeping, problems with sex life and fear of disease recurrence.
The nurse-led clinic model is an appropriate and effective method of ensuring that physical and psycho-social issues are identified and managed in a timely manner. This model empowers breast cancer survivors with information about their diagnosis and available services.
Keywords: early breast cancer, survivorship, breast care nursing, oncology nursing and cancer care
Procedia PDF Downloads 400
9649 Concept of the Active Flipped Learning in Engineering Mechanics
Authors: Lin Li, Farshad Amini
Abstract:
The flipped classroom has been introduced to promote collaborative learning and higher-order learning objectives. In contrast to the traditional classroom, the flipped classroom has students watch prerecorded lecture videos before coming to class, so that "class becomes the place to work through problems, advance concepts, and engage in collaborative learning". This paper combines the flipped classroom with active learning to establish an active flipped learning (AFL) model, aiming to promote active learning, stress deep learning, encourage student engagement and highlight data-driven personalized learning. Because students have watched the lecture prior to class, contact hours can be devoted to problem-solving and to gaining a deeper understanding of the subject matter. The instructor is able to provide students with a wide range of learner-centered opportunities in class for greater mentoring and collaboration, increasing the possibility of engaging students. Currently, little is known about the extent to which AFL improves engineering students' performance. This paper presents a preliminary study on a core course for sophomore students, Engineering Mechanics. A series of surveys and interviews was conducted to compare students' learning engagement, empowerment, self-efficacy, and satisfaction under the AFL. It was found that the AFL model, taking advantage of advanced technology, is a convenient and professional avenue for engineering students to strengthen their academic confidence and self-efficacy in Engineering Mechanics by actively participating in learning and fostering their deep understanding of engineering statics and dynamics.
Keywords: active learning, engineering mechanics, flipped classroom, performance
Procedia PDF Downloads 294
9648 Bridge Health Monitoring: A Review
Authors: Mohammad Bakhshandeh
Abstract:
Structural Health Monitoring (SHM) is a crucial practice that plays a vital role in ensuring the safety and integrity of critical structures, in particular bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.
Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis
Procedia PDF Downloads 90
9647 Nitric Oxide and Potassium Channels but Not Opioid and Cannabinoid Receptors Mediate Tramadol-Induced Peripheral Antinociception in Rat Model of Paw Pressure Withdrawal
Authors: Raquel R. Soares-Santos, Daniel P. Machado, Thiago L. Romero, Igor D. G. Duarte
Abstract:
Tramadol, an analgesic classified as an 'atypical opioid,' exhibits both opioid and non-opioid mechanisms of action. This study aimed to explore these mechanisms, specifically the opioid-, cannabinoid-, nitric oxide-, and potassium channel-based mechanisms that contribute to the peripheral antinociceptive effect of tramadol, in an experimental rat model. The nociceptive threshold was determined using paw pressure withdrawal. To examine the mechanisms of action, several substances were administered intraplantarly: naloxone, a non-selective opioid antagonist (50 μg/paw); AM251 (80 μg/paw) and AM630 (100 μg/paw), selective antagonists of type 1 and type 2 cannabinoid receptors, respectively; the nitric oxide synthase inhibitors L-NOArg, L-NIO, L-NPA, and L-NIL (24 μg/paw); and ODQ and zaprinast, inhibitors of guanylate cyclase and cGMP phosphodiesterase, respectively. Additionally, the potassium channel blockers glibenclamide, tetraethylammonium, dequalinium, and paxilline were used. The results showed that the opioid and cannabinoid receptor antagonists did not reverse tramadol's effects. L-NOArg, L-NIO, and L-NPA partially reversed antinociception, while ODQ completely reversed, and zaprinast enhanced, tramadol's antinociceptive effect. Notably, glibenclamide blocked tramadol's antinociception in a dose-dependent manner. These findings suggest that the peripheral antinociceptive effect of tramadol is likely mediated by the nitrergic pathway and ATP-sensitive potassium channels, rather than by the opioid and cannabinoid pathways.
Keywords: tramadol, nitric oxide, potassium channels, peripheral analgesia, opioid
Procedia PDF Downloads 14