Search results for: goal-based modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1802

212 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher institutions can continue to benefit from their social and academic programs, higher education is facing challenges in the area of character development. These are manifested in college completion rates and in the prevalence of teen pregnancies, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and mostly in developed countries. However, given the increasingly complex nature of today's student body, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed to suit changing trends. The main purpose of this study was to identify such opportunities in colleges and to develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales (in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables, and character development outcomes as the dependent variable) was administered to 501 third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The analysis showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05).
In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but indirectly. The study concluded that the classroom offers many opportunities for teachers to teach, model, and integrate character development among their students. Public colleges and universities are therefore encouraged to deliberately implement experiences that cultivate character within the classroom. These may contribute substantially to students' character development outcomes and hence support effective models of behaviour analysis in higher education.
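
The reported path coefficient and explained variance can be illustrated with a toy computation; the following is a minimal sketch using simulated standardized data (not the study's survey data), taking the abstract's reported direct path of .75 as the assumed true coefficient:

```python
import numpy as np

# Hypothetical illustration: recovering a standardized path coefficient and
# the variance explained (R^2) by least squares. All data are simulated.
rng = np.random.default_rng(42)
n = 5000
beta = 0.75                                   # assumed direct path (from the abstract)
x = rng.standard_normal(n)                    # standardized predictor (in-classroom interaction)
noise = rng.standard_normal(n) * np.sqrt(1 - beta**2)
y = beta * x + noise                          # standardized outcome (character development)

# Least-squares estimate of the standardized path coefficient
b_hat = np.dot(x, y) / np.dot(x, x)

# Proportion of outcome variance explained by the fitted path
resid = y - b_hat * x
r2 = 1 - resid.var() / y.var()

print(f"path coefficient = {b_hat:.2f}, R^2 = {r2:.2f}")
```

With a single standardized predictor, R^2 is simply the squared path coefficient (0.75^2 = 0.56 here); the full structural model in the study combines several such paths to reach its reported 57%.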

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 136
211 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could be used to help with the selection of the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions alongside environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data. As such, data was collected from previous LCA studies of site remediation technologies; this step identified knowledge gaps and limitations within the project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared are bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. The LCA study is being carried out in the modelling software SimaPro. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they fluctuate at different sites. All the results will then be summarized, and an interactive Excel-based tool will be developed to help select the most sustainable site remediation technology.
Preliminary LCA results show improved sustainability for each decontamination technology compared to the no-remediation option for a gasoline-contaminated site. Sensitivity analyses are now being completed on site-specific parameters, including soil type and transportation distance, to determine how the environmental impacts vary at other contaminated gasoline locations. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined.
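
The sensitivity analysis described above can be sketched as a one-at-a-time perturbation of inventory parameters; the figures below (soil mass, haul distance, truck emission factor) are purely hypothetical stand-ins, not values from the study's SimaPro model:

```python
# One-at-a-time sensitivity sketch for a single impact category (kg CO2e) of
# the excavation/off-site-disposal option. All numbers are illustrative.
def gwp_excavation(soil_t, distance_km, ef_truck_kgco2_per_tkm):
    """Global warming potential (kg CO2e) of hauling excavated soil off-site."""
    return soil_t * distance_km * ef_truck_kgco2_per_tkm

baseline = {"soil_t": 1000.0, "distance_km": 50.0, "ef_truck_kgco2_per_tkm": 0.1}
base_gwp = gwp_excavation(**baseline)          # 5000 kg CO2e with these inputs

# Perturb each parameter by +20% and record the relative change in impact.
sensitivity = {}
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * 1.2})
    sensitivity[name] = (gwp_excavation(**perturbed) - base_gwp) / base_gwp

print(sensitivity)
```

Because this toy impact model is linear, each parameter shifts the result by the full 20%; in a real LCA the response differs by parameter, which is exactly what the sensitivity analysis is meant to reveal.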

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 58
210 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, from which physiological indicators and characteristics can be acquired. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing results that reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as the personalization objectives of the individual LPMs. In this study, the blood pressures and flow rates of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on these objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function defined as the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data.
The average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the effectiveness of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of the LPM for the blood circulatory system. After optimization, an LPM with individual parameters can output individual physiological indicators, which are applicable to the numerical simulation of patient-specific hemodynamics.
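
The optimization loop described above can be sketched in a few lines: simulated annealing tunes lumped parameters until a simulated waveform matches a target waveform in the root-mean-square sense. The waveform model below (an exponential pressure decay governed by a hypothetical resistance R and compliance C) is illustrative only, not the paper's closed-loop LPM:

```python
import math, random

# Simulated annealing sketch: fit (R, C) so that waveform(R, C) matches a
# "measured" target generated from known true parameters. Illustrative model.
t = [i * 0.05 for i in range(40)]

def waveform(R, C):
    return [R * math.exp(-ti / (R * C)) for ti in t]

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

target = waveform(1.2, 0.8)           # stand-in for collected patient data

random.seed(0)
params = [0.5, 2.0]                   # deliberately poor initial guess
err = rmse(waveform(*params), target)
initial_err = err
T = 1.0
for step in range(2000):
    cand = [max(0.05, p + random.gauss(0.0, 0.05)) for p in params]
    cand_err = rmse(waveform(*cand), target)
    # Always accept improvements; accept worse moves with prob. exp(-d/T)
    if cand_err < err or random.random() < math.exp(-(cand_err - err) / T):
        params, err = cand, cand_err
    T *= 0.995                        # geometric cooling schedule

print(f"fitted R, C = {params[0]:.2f}, {params[1]:.2f}; RMSE {initial_err:.3f} -> {err:.4f}")
```

The acceptance of occasional uphill moves at high temperature is what lets the method escape local minima before the cooling schedule makes the search greedy.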

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 137
209 Prediction of Fluid Induced Deformation using Cavity Expansion Theory

Authors: Jithin S. Kumar, Ramesh Kannan Kandasami

Abstract:

Geomaterials are generally porous in nature due to the presence of discrete particles and interconnected voids. The porosity present in these geomaterials plays a critical role in many engineering applications, such as CO2 sequestration, wellbore strengthening, enhanced oil and hydrocarbon recovery, hydraulic fracturing, and subsurface waste storage. These applications involve solid-fluid interactions, which govern the changes in porosity that in turn affect the permeability and stiffness of the medium. Injecting fluid into a geomaterial results in permeation, which involves small or negligible deformation of the soil skeleton, followed by cavity expansion, fingering, or fracturing (different forms of instability) under large deformation, especially when the flow rate is greater than the ability of the medium to permeate the fluid. The complexity of this problem increases because the geomaterial behaves like a solid or a fluid under certain conditions. Thus, it is important to understand this multiphysics problem, in which, in addition to permeation, the elastic-plastic deformation of the soil skeleton plays a vital role during fluid injection. The phenomena of permeation and cavity expansion in porous media have been studied independently through extensive experimental and analytical/numerical models. The analytical models generally use Darcy's law or diffusion equations to capture the fluid flow during permeation, while elastic-plastic models (Mohr-Coulomb and Modified Cam-Clay) are used to predict the solid deformations. Hitherto, research has generally focused on modelling cavity expansion without considering the effect of the injected fluid entering the medium. Very few studies have considered the effect of the injected fluid on the deformation of the soil skeleton, and the porosity changes during fluid injection and the coupled elastic-plastic deformation are not clearly understood.
In this study, the phenomena of permeation and instabilities such as cavity and finger/fracture formation will be quantified extensively by performing experiments using a novel experimental setup, in addition to utilizing image processing techniques. This experimental study will describe the fluid flow and soil deformation characteristics under different boundary conditions. Further, a well-refined coupled semi-analytical model will be developed to capture the physics involved in quantifying the deformation behaviour of the geomaterial during fluid injection.
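
The regime argument above (permeation versus cavity expansion when the injection rate outpaces the medium's permeation capacity) can be made concrete with a rough Darcy's-law estimate; every number below is an order-of-magnitude placeholder, not data from the study:

```python
# Rough regime check: if the imposed injection rate exceeds what Darcy flow
# can carry away, the excess volume must deform the skeleton (cavity
# expansion, fingering, or fracturing). Values are illustrative only.
def darcy_flow_rate(k, area, dp, mu, length):
    """Volumetric flow rate (m^3/s) through a porous sample, Q = k*A*dp/(mu*L)."""
    return k * area * dp / (mu * length)

k = 1e-12      # permeability, m^2 (fine sand, order of magnitude only)
A = 0.01       # cross-sectional area, m^2
dp = 1e5       # injection overpressure, Pa
mu = 1e-3      # fluid viscosity, Pa.s (water)
L = 0.1        # drainage length, m

q_capacity = darcy_flow_rate(k, A, dp, mu, L)    # 1e-5 m^3/s with these inputs
q_injection = 5e-5                               # imposed injection rate, m^3/s

regime = "permeation" if q_injection <= q_capacity else "cavity expansion / fracturing"
print(f"Darcy capacity {q_capacity:.1e} m^3/s -> expected regime: {regime}")
```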

Keywords: solid-fluid interaction, permeation, poroelasticity, plasticity, continuum model

Procedia PDF Downloads 74
208 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia

Authors: Agus Hariyadi, Hiroatsu Fukuda

Abstract:

In a tropical country like Indonesia, especially in Jakarta, most building energy consumption goes to the cooling system, followed by electric lighting. One passive design strategy that can be applied is to optimize the use of natural light from the sun. In this region, natural light is available almost every day throughout the year. Natural light has several effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another factor that has to be considered in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the façade design. An external shading device can minimize the external load introduced into the room, especially from direct solar radiation, which accounts for 80% of the external energy load entering the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone of the room. One horizontal blind that can be used for this purpose is the Sudare, a traditional Japanese blind that has long been used in traditional Japanese houses, especially in summer. In its original function, the Sudare is used to block direct solar radiation while still admitting natural ventilation. It has some physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of Sudare will be simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling will be done with the Ladybug and Honeybee plugins.
These are two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency of the geometry used in both EnergyPlus and DAYSIM. The aim of this research is to find the façade design configuration that best reduces the external load on the building, minimizing the energy needed for the cooling system, while maintaining the natural light distribution inside the room to maximize visual comfort for occupants and minimize electrical energy consumption.
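
One common metric a DAYSIM-style annual daylight simulation feeds into is daylight autonomy: the fraction of occupied hours in which daylight alone meets a target illuminance (300 lux is a common office threshold). A minimal sketch with invented hourly illuminance values and a hypothetical occupancy window:

```python
# Daylight autonomy sketch: fraction of occupied hours meeting a 300 lux
# target from daylight alone. The hourly values below are invented.
illuminance_lux = [0, 0, 50, 120, 310, 480, 620, 710, 650, 520, 340, 150]
occupied = illuminance_lux[3:11]      # hypothetical occupied hours of the day

target_lux = 300.0
daylight_autonomy = sum(1 for e in occupied if e >= target_lux) / len(occupied)
print(f"daylight autonomy = {daylight_autonomy:.0%}")
```

Comparing this metric across Sudare scales, together with the EnergyPlus cooling load, is one way the two simulation outputs can be weighed against each other.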

Keywords: façade, natural light, blind, energy

Procedia PDF Downloads 345
207 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture

Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani

Abstract:

Global warming and its impact on climate change are among the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling these emissions. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated: an increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the power required for the recycle gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: achieving a lower ∆Tmin requires a larger heat exchanger, which means a higher capital cost but better heat recovery and a lower operational cost.
∆Tmin is therefore selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance analysis and operational conditions of the NetPower oxy-combustion cycle based on its heat exchanger design.
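
The capital-cost side of the trade-off above follows directly from the log-mean temperature difference (LMTD): for a fixed duty, a smaller approach temperature shrinks the LMTD and inflates the required exchanger area A = Q / (U * LMTD). A back-of-the-envelope sketch with illustrative duty, heat-transfer coefficient, and terminal temperature differences (none taken from the study):

```python
import math

# LMTD-based area estimate for two candidate approach temperatures.
def lmtd(dt1, dt2):
    """Log-mean temperature difference for terminal differences dt1, dt2 (K)."""
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(duty_w, u, dt1, dt2):
    """Heat-transfer area (m^2) from A = Q / (U * LMTD)."""
    return duty_w / (u * lmtd(dt1, dt2))

Q = 5e6      # duty, W (illustrative)
U = 500.0    # overall heat-transfer coefficient, W/m^2.K (illustrative)

# Hot-end difference held at 40 K; cold-end approach is the design dTmin.
area_dtmin_20 = required_area(Q, U, 40.0, 20.0)
area_dtmin_5 = required_area(Q, U, 40.0, 5.0)

print(f"A(dTmin=20 K) = {area_dtmin_20:.0f} m^2, A(dTmin=5 K) = {area_dtmin_5:.0f} m^2")
```

Dropping the approach from 20 K to 5 K here inflates the area (and hence capital cost) by roughly 70%, while the tighter approach improves heat recovery and lowers operating cost, which is precisely the trade-off the pinch analysis resolves.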

Keywords: carbon capture and storage, oxy-combustion, netpower cycle, oxy turbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis

Procedia PDF Downloads 204
206 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour of the propagating electric signal at the vertices. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements: the growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions.
The results achieved in the MIMO application also compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
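
The RMT comparison above can be reproduced generically in a few lines: the nearest-neighbour spacings of a random real symmetric (GOE) matrix exhibit the level repulsion predicted by the Wigner surmise P(s) = (pi s/2) exp(-pi s^2/4). This is a textbook numerical check, not the study's cable-network data:

```python
import numpy as np

# Nearest-neighbour level-spacing statistics of a GOE-like random matrix.
rng = np.random.default_rng(1)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / 2.0                        # real symmetric matrix
ev = np.linalg.eigvalsh(h)

# Crude unfolding: keep the central half of the spectrum, where the level
# density is roughly constant, and normalise spacings to unit mean.
bulk = ev[n // 4 : 3 * n // 4]
s = np.diff(bulk)
s = s / s.mean()

# Level repulsion: the Wigner surmise CDF gives P(s < 0.25) ~ 0.048, far
# below the ~0.221 expected for uncorrelated (Poisson) levels.
frac_small = (s < 0.25).mean()
wigner_cdf = 1.0 - np.exp(-np.pi * 0.25**2 / 4.0)
print(f"mean spacing = {s.mean():.3f}, P(s<0.25): empirical {frac_small:.3f}, Wigner {wigner_cdf:.3f}")
```

A proper analysis would unfold with the integrated density of states rather than a global mean, but even this crude version shows the suppression of small spacings that distinguishes chaotic from regular spectra.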

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 173
205 The Impact of Research Anxiety on Research Orientation and Interest in Research Courses in Social Work Students

Authors: Daniel Gredig, Annabelle Bartelsen-Raemy

Abstract:

Social work professionals should underpin their decisions with scientific knowledge and research findings. Hence, research is used as a framework for social work education, and research courses have become a taken-for-granted component of study programmes. However, it has been acknowledged that social work students hold negative beliefs and attitudes towards, and frequently fear, research courses. Against this background, the present study aimed to establish the relationship between students' fear of research courses, their research orientation, and their interest in research courses. We hypothesized that fear predicts interest in research courses. Further, we hypothesized that research orientation (perceived importance of research, attributed usefulness of research for social work practice, and perceived unbiased nature of research) was a mediating variable. In 2014, 2015, and 2016, we invited students enrolled in a bachelor programme in social work in Switzerland to participate in the study during their introduction day at the school, which took place two weeks before their programme started. For data collection, we used an anonymous self-administered online questionnaire filled in on site. Data were analysed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 708 students enrolled in a social work bachelor programme (501 female, 184 male, and 5 intersexual), aged 19-56, with various entitlements to study, and registered for three different programme modes (full-time study; part-time study with field placements in blocks; part-time study involving concurrent field placement). Analysis showed that interest in research courses was predicted by fear of research courses (β = -0.29) as well as by the perceived importance (β = 0.27), attributed usefulness (β = 0.15), and perceived unbiased nature of research (β = 0.08).
These variables were predicted, in turn, by fear of research courses (β = -0.10, β = -0.23, and β = -0.13, respectively). Moreover, interest was predicted by age (β = 0.13). Fear of research courses was predicted by age (β = -0.10), female gender (β = 0.28), and having completed a general baccalaureate (β = -0.09) (GFI = 0.997, AGFI = 0.988, SRMR = 0.016, CMIN/df = 0.946, adj. R2 = 0.312). The findings evidence a direct as well as a mediated impact of fear on interest in research courses among entering first-year students in a social work bachelor programme, and highlight one of the challenges that social work education in a research framework has to meet. There appear to have been considerable efforts to address the research orientation of students; however, these findings point out that, additionally, research anxiety in terms of fear of research courses should be considered and addressed by teachers when conceptualizing research courses.
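
The mediation structure reported above can be summarised by combining the published path coefficients: each indirect effect of fear on interest is the product of the fear-to-mediator and mediator-to-interest paths, and the total effect adds these to the direct path:

```python
# Indirect and total effects computed from the path coefficients reported in
# the abstract (direct fear->interest, and three mediated routes).
direct = -0.29                              # fear -> interest
paths = {                                   # (fear -> mediator, mediator -> interest)
    "perceived importance": (-0.10, 0.27),
    "attributed usefulness": (-0.23, 0.15),
    "unbiased nature": (-0.13, 0.08),
}

indirect = {m: a * b for m, (a, b) in paths.items()}
total = direct + sum(indirect.values())

print({m: round(v, 4) for m, v in indirect.items()})
print(f"total effect of fear on interest = {total:.3f}")
```

The mediated routes add roughly -0.07 to the direct -0.29, giving a total standardized effect of about -0.36, which is why the authors stress addressing fear itself and not only research orientation.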

Keywords: research anxiety, research courses, research interest, research orientation, social work students, teaching

Procedia PDF Downloads 188
204 Evaluation of Microbial Accumulation of Household Wastewater Purified by Advanced Oxidation Process

Authors: Nazlı Çetindağ, Pelin Yılmaz Çetiner, Metin Mert İlgün, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır

Abstract:

Water scarcity is an unavoidable issue impacting an increasing number of individuals daily, a global crisis stemming from swift population growth, urbanization, and excessive resource exploitation. Consequently, solutions that involve the reclamation of wastewater are considered essential. In this context, household wastewater, categorized as greywater, accounts for a significant share of the freshwater used for residential purposes, much of it attributable to washing. This type of wastewater comprises diverse elements, including organic substances, soaps, detergents, solvents, and biological components, as well as inorganic elements such as certain metal ions and particles. The physical characteristics of wastewater vary depending on its source, whether commercial, domestic, or from a hospital setting; consequently, the treatment strategy for this wastewater type necessitates comprehensive investigation and appropriate handling. The advanced oxidation process (AOP) emerges as a promising technique based on the generation of reactive hydroxyl radicals, which are highly effective in oxidizing organic pollutants. This method is preferred over others such as coagulation, flocculation, sedimentation, and filtration because it avoids undesirable by-products. The current study focused on exploring the feasibility of AOP for treating actual household wastewater. To achieve this, a laboratory-scale device was designed to effectively direct the formed radicals toward organic pollutants, resulting in a lower organic load in the wastewater. The number of microorganisms present in the treated wastewater, in addition to its chemical content, was then analyzed to determine whether the lab-scale AOP device eliminates microbial accumulation. This is an important parameter, since microbes can indirectly affect human health and machine hygiene.
To do this, water samples were taken under treated and untreated conditions and then inoculated on general-purpose agar to determine the total plate count. The analysis showed that AOP may be an option for treating household wastewater and suppressing microbial growth.

Keywords: usage of household water, advanced oxidation process, water reuse, modelling

Procedia PDF Downloads 50
203 Mapping the Turbulence Intensity and Excess Energy Available to Small Wind Systems over 4 Major UK Cities

Authors: Francis C. Emejeamara, Alison S. Tomlin, James Gooding

Abstract:

Due to the highly turbulent nature of urban air flows, and by virtue of the fact that turbines are likely to be located within the roughness sublayer of the urban boundary layer, proposed urban wind installations face major challenges compared to rural installations. The challenge of operating within turbulent winds can, however, be counteracted by the development of suitable gust-tracking solutions. In order to assess the cost-effectiveness of such controls, a detailed understanding of the urban wind resource, including its turbulent characteristics, is required. Estimating the ambient turbulence and total kinetic energy available at different control response times is essential in evaluating the potential performance of wind systems within the urban environment, should effective control solutions be employed. However, high-resolution wind measurements within the urban roughness sublayer are uncommon, and detailed CFD modelling approaches are too computationally expensive to apply routinely on a city-wide scale. This paper therefore presents an alternative semi-empirical methodology for estimating the excess energy content (EEC) present in the complex and gusty urban wind. An analytical methodology for predicting the total wind energy available at a potential turbine site is proposed by assessing the relationship between turbulence intensities and EEC for different control response times. The semi-empirical model is then incorporated into an analytical methodology that was initially developed to predict mean wind speeds at various heights within the built environment, based on detailed mapping of its aerodynamic characteristics. The additional estimates of turbulence intensities and EEC allow a more complete assessment of the available wind resource.
The methodology is applied to 4 UK cities, with results showing the potential of mapping turbulence intensities and the total wind energy available at different heights within each city. Considering the effect of ambient turbulence and the choice of wind system, the wind resource over neighbourhood regions (at 250 m uniform resolution) and building rooftops within the 4 cities was assessed, with results highlighting the promise of mapping potential turbine sites within each city.
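
The two quantities being mapped can be illustrated with a toy calculation: turbulence intensity is the ratio of the wind-speed standard deviation to the mean, and an excess-energy-content style ratio compares the mean of the cubed wind speed (relevant because wind power scales with u^3) to the cube of the mean. The short series below is invented; real inputs would be high-resolution urban wind measurements:

```python
# Turbulence intensity and fractional excess energy from a wind-speed series.
u = [4.0, 6.0, 5.0, 7.0, 3.0, 5.0, 6.0, 4.0]     # wind speeds, m/s (invented)

n = len(u)
u_mean = sum(u) / n
sigma = (sum((x - u_mean) ** 2 for x in u) / n) ** 0.5

ti = sigma / u_mean                               # turbulence intensity
mean_cubed = sum(x**3 for x in u) / n
eec = (mean_cubed - u_mean**3) / u_mean**3        # energy above the mean-wind estimate

print(f"TI = {ti:.3f}, excess energy fraction = {eec:.3f}")
```

Here the gusty series carries 18% more kinetic energy flux than the mean wind speed alone would suggest, which is exactly the margin a fast gust-tracking controller could try to capture.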

Keywords: excess energy content, small-scale wind, turbulence intensity, urban wind energy, wind resource assessment

Procedia PDF Downloads 474
202 Uncertainty Quantification of Fuel Compositions on Premixed Bio-Syngas Combustion at High-Pressure

Authors: Kai Zhang, Xi Jiang

Abstract:

The effect of fuel variability on the premixed combustion of bio-syngas mixtures is of great importance in bio-syngas utilisation. Uncertainties in the concentrations of fuel constituents such as H2, CO, and CH4 may lead to unpredictable combustion performance, combustion instabilities, and hot spots, which may deteriorate and damage the combustion hardware. Numerical modelling and simulations can assist in understanding the behaviour of bio-syngas combustion with pre-defined species concentrations, but the evaluation of concentration variabilities is expensive. To be more specific, questions such as 'what is the burning velocity of bio-syngas at a specific equivalence ratio?' have been answered either experimentally or numerically, while questions such as 'what is the likely burning velocity when the precise concentrations of the bio-syngas constituents are unknown, but the concentration ranges are prescribed?' have not yet been answered. Uncertainty quantification (UQ) methods can be used to tackle such questions and assess the effects of fuel composition. An efficient probabilistic UQ method based on Polynomial Chaos Expansion (PCE) techniques is employed in this study. The method relies on representing random variables (combustion performance metrics) with orthogonal polynomials such as Legendre or Hermite polynomials. The PCE constructed via Galerkin projection provides easy access to global sensitivities such as main, joint, and total Sobol indices. In this study, the impacts of fuel composition on the combustion properties (adiabatic flame temperature and laminar flame speed) of bio-syngas fuel mixtures are presented using this PCE technique at several equivalence ratios. High-pressure effects on bio-syngas combustion instability are obtained using a detailed chemical mechanism, the San Diego mechanism.
Guidance on reducing combustion instability arising from the upstream biomass gasification process is provided by quantifying the contributions of composition variations to the variance of the physicochemical properties of bio-syngas combustion. It was found that flame speed is very sensitive to hydrogen variability in bio-syngas, and reducing hydrogen uncertainty from upstream biomass gasification processes can greatly reduce bio-syngas combustion instability. Variation of the methane concentration, although thought to be important, has limited impact on laminar flame instabilities, especially for lean combustion. Further studies on the UQ of the percentage concentration of hydrogen in bio-syngas can be conducted to guide the safer use of bio-syngas.
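
The route from PCE coefficients to Sobol indices can be shown on a minimal regression-based sketch (the study uses Galerkin projection; regression is a common alternative that yields the same indices here). The response surface g below is a hypothetical stand-in for a flame-speed model, chosen so the exact answer is known:

```python
import numpy as np

# Regression-based PCE for two inputs uniform on [-1, 1], Legendre basis
# {1, x1, x2, x1*x2}; Sobol indices follow from the squared coefficients
# weighted by the basis norms. The response g is illustrative only.
rng = np.random.default_rng(7)
x = rng.uniform(-1.0, 1.0, size=(2000, 2))

def g(x1, x2):
    return 2.0 * x1 + 1.0 * x2 + 0.5 * x1 * x2   # hypothetical response surface

y = g(x[:, 0], x[:, 1])

# Design matrix of Legendre basis functions evaluated at the samples
psi = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(psi, y, rcond=None)

# E[psi_k^2] under U(-1, 1): 1, 1/3, 1/3, 1/9
norms = np.array([1.0, 1 / 3, 1 / 3, 1 / 9])
partial_var = coef**2 * norms
total_var = partial_var[1:].sum()                # constant term carries no variance

s1, s2, s12 = partial_var[1:] / total_var
print(f"Sobol indices: S1 = {s1:.3f}, S2 = {s2:.3f}, S12 = {s12:.3f}")
```

The dominant S1 plays the role that hydrogen plays in the study: the input whose uncertainty contributes most of the output variance, and hence the one worth constraining upstream.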

Keywords: bio-syngas combustion, clean energy utilisation, fuel variability, PCE, targeted uncertainty reduction, uncertainty quantification

Procedia PDF Downloads 276
201 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. In the Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered students' activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities will continue for three years after approval of the Introductory Mathematical Course if they have successfully completed the Math1 and Math2 courses; passing the Math3 course does not have any effect on the student's activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model for explaining the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models: students who pass the Math1 and Math2 courses stay active for three years after taking the Introductory Mathematical Course and continue to follow the recommended engineering curriculum, and the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in this research do not consider the time students took to pass the three mathematics courses, but they can successfully assess courses in the university curriculum.
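The modelling workflow just described, a binomial GLM scored by AUC and accuracy, can be sketched as follows. The data and variable roles here are synthetic placeholders, not the study's 12 student characteristics; this is a minimal numpy-only sketch of the same pipeline, not the authors' R code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the student dataset: two informative predictors
# (think Math1/Math2 approval) and one pure-noise variable.
n = 1000
X = rng.normal(size=(n, 3))
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1]            # x3 carries no signal
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Binomial GLM with a logit link, fitted by gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])     # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)          # log-likelihood gradient
    return w

def predict_proba(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-Xb @ w))

def auc(y, scores):
    """Rank-based AUC: P(score of a positive > score of a negative)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

w = fit_logistic(X, y)
p = predict_proba(X, w)
print(round(auc(y, p), 2), round(((p > 0.5) == y).mean(), 2))
```

In practice, Model 1's p-value screen and Model 2's stepAIC selection would be layered on top of this fitting step to decide which of the 12 characteristics to keep.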

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 94
200 The Taste of Macau: An Exploratory Study of Destination Food Image

Authors: Jianlun Zhang, Christine Lim

Abstract:

Local food is one of the most attractive elements to tourists. The role of local cuisine in destination branding is very important because it is a distinctive identity that helps tourists remember the destination. The objectives of this study are: (1) Test the direct relation between the cognitive image of destination food and tourists’ intention to eat local food. (2) Examine the mediating effect of tourists’ desire to try destination food on the relationship between the cognitive image of local food and tourists’ intention to eat destination food. (3) Study the moderating effect of tourists’ perceived difficulty in finding local food on the relationship between tourists’ desire to try destination food and tourists’ intention to eat local food. To achieve the goals of this study, Macanese cuisine is selected as the destination food. Macau is located in Southeastern China and is a former colonial city of Portugal. The taste and texture of Macanese cuisine are unique because it is a fusion of cuisines from many countries and regions of mainland China. As people travel to seek authentic, exotic experiences, it is important to investigate whether the food image of Macau leaves a good impression on tourists and motivates them to try local cuisine. A total of 449 Chinese tourists were involved in this study. To analyze the data collected, the partial least squares structural equation modelling (PLS-SEM) technique was employed. Results suggest that the cognitive image of Macanese cuisine has a direct effect on tourists’ intention to eat Macanese cuisine. Tourists’ desire to try Macanese cuisine mediates the cognitive image-intention relationship. Tourists’ perceived difficulty of finding Macanese cuisine moderates the desire-intention relationship: the lower tourists’ perceived difficulty in finding Macanese cuisine, the stronger the desire-intention relationship. There are several practical implications of this study.
First, the government tourism website can develop an authentic storyline about the evolvement of local cuisine, which provides an opportunity for tourists to taste the history of the destination and create a novel experience for them. Second, the government should consider the development of food events, restaurants, and hawker businesses. Third, to lower tourists’ perceived difficulty in finding local cuisine, there should be locations of restaurants and hawker stalls with clear instructions for finding them on the websites of the government tourism office, popular tourism sites, and public transportation stations in the destination. Fourth, in the post-COVID-19 era, travel risk will be a major concern for tourists. Therefore, when promoting local food, the government tourism website should post images that show food safety and hygiene.
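The mediation test reported above is carried out with PLS-SEM; as a stripped-down illustration of the same logic only, the sketch below runs a percentile-bootstrap test of the indirect (desire-mediated) path on synthetic data with simple OLS regressions. The effect sizes and noise levels are invented; only the sample size (449) mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the three constructs:
# cognitive image -> desire -> intention, plus a direct path.
n = 449
image = rng.normal(size=n)
desire = 0.6 * image + rng.normal(scale=0.8, size=n)
intention = 0.3 * image + 0.5 * desire + rng.normal(scale=0.8, size=n)

def ols_slope(x, y):
    """Slope of y on x (with intercept), via least squares."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(image, desire, intention):
    a = ols_slope(image, desire)                 # image -> desire
    # b: effect of desire on intention, controlling for image
    X = np.column_stack([np.ones(len(image)), image, desire])
    b = np.linalg.lstsq(X, intention, rcond=None)[0][2]
    return a * b

# Percentile bootstrap CI for the mediated (indirect) path.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(image[idx], desire[idx], intention[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect CI: [{lo:.2f}, {hi:.2f}]")  # excludes 0 -> mediation
```

A moderation test would additionally interact perceived difficulty with desire in the intention equation; PLS-SEM software handles the latent-variable measurement layer that this sketch omits.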

Keywords: cognitive image of destination food, desire to try destination food, intention to eat food in the destination, perceived difficulties of finding local cuisine, PLS-SEM

Procedia PDF Downloads 189
199 Surgical Hip Dislocation of Femoroacetabular Impingement: Survivorship and Functional Outcomes at 10 Years

Authors: L. Hoade, O. O. Onafowokan, K. Anderson, G. E. Bartlett, E. D. Fern, M. R. Norton, R. G. Middleton

Abstract:

Aims: Femoroacetabular impingement (FAI) was first recognised as a potential driver of hip pain at the turn of the last millennium. While there is an increasing trend towards surgical management of FAI by arthroscopic means, open surgical hip dislocation and debridement (SHD) remains the gold standard of care in terms of reported outcome measures. (1) The long-term functional and survivorship outcomes of SHD as a treatment for FAI are yet to be sufficiently reported in the literature. This study sets out to help address this imbalance. Methods: We undertook a retrospective review of our institutional database for all patients who underwent SHD for FAI between January 2003 and December 2008. A total of 223 patients (241 hips) were identified and underwent a ten-year review with a standardised radiograph and a patient-reported outcome measures questionnaire. The primary outcome measure of interest was survivorship, defined as progression to total hip arthroplasty (THA). Negative predictive factors were analysed. Secondary outcome measures of interest were survivorship to further (non-arthroplasty) surgery, functional outcomes as reflected by patient-reported outcome measure (PROM) scores, and whether a learning curve could be identified. Results: The final cohort consisted of 131 females and 110 males, with a mean age of 34 years. There was an overall native hip joint survival rate of 85.4% at ten years. Those who underwent a THA were significantly older at initial surgery and had radiographic evidence of preoperative osteoarthritis and pre- and post-operative acetabular undercoverage. In those who had not progressed to THA, the average Non-Arthritic Hip Score and Oxford Hip Score at ten-year follow-up were 72.3% and 36/48, respectively, and 84% still deemed their surgery worthwhile. A learning curve was found to exist that was predicated on case selection rather than surgical technique.
Conclusion: This is only the second study to evaluate the long-term outcomes (beyond ten years) of SHD for FAI and the first outside the originating centre. Our results suggest that, with correct patient selection, this remains an operation with worthwhile outcomes at ten years. How the results of open surgery compare with those of arthroscopy remains to be answered. While these results precede the advent of collision software modelling tools, these data help set a benchmark for future comparison of other techniques' effectiveness at the ten-year mark.
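Survival figures such as the 85.4% ten-year rate above are typically produced by a Kaplan-Meier (product-limit) estimate, which handles hips censored before ten years. A minimal sketch with toy data (the cohort below is invented, with conversion to THA as the event):

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up in years for each hip
    events : 1 if the endpoint (conversion to THA) occurred, 0 if censored
    """
    times, events = np.asarray(times, float), np.asarray(events, int)
    out_t, out_s, s = [], [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)           # hips still followed at t
        d = np.sum((times == t) & (events == 1))
        s *= 1 - d / at_risk                   # conditional survival at t
        out_t.append(t)
        out_s.append(s)
    return out_t, out_s

# Toy cohort: most hips survive ten years, a few convert to THA earlier.
t = [2, 4, 4, 6, 10, 10, 10, 10, 10, 10]
e = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
ts, ss = kaplan_meier(t, e)
print(ts, [round(s, 2) for s in ss])
```

The last value of `ss` is the estimated survival at the final event time, the analogue of the study's ten-year survivorship figure.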

Keywords: femoroacetabular impingement, hip pain, surgical hip dislocation, hip debridement

Procedia PDF Downloads 84
198 Numerical Modelling and Experiment of a Composite Single-Lap Joint Reinforced by Multifunctional Thermoplastic Composite Fastener

Authors: Wenhao Li, Shijun Guo

Abstract:

Carbon fibre reinforced composites are progressively replacing metal structures in modern civil aircraft, because composite materials offer a large potential weight saving compared with metal. However, the weight saving achieved to date in composite structures is far less than the theoretical potential, due to many uncertainties in structural integrity and safety concerns. Unlike conventional metallic structures, composite components are bonded together along joints, where structural integrity is a major concern. To ensure safety, metal fasteners are used to reinforce the composite bonded joints. One solution for achieving a significant weight saving in composite structures is to develop an effective on-board Structural Health Monitoring (SHM) system. By monitoring the real-life stress state of composite structures during service, the safety margin set in the structural design can be reduced with confidence. It provides a safeguard that minimizes the need for programmed inspections and allows maintenance to be need-driven rather than usage-driven. The aim of this paper is to develop a smart composite joint. The key technology is a multifunctional thermoplastic composite fastener (MTCF). The MTCF will replace some of the existing metallic fasteners in the most critical locations distributed over the aircraft composite structures to reinforce the joints and form an on-board SHM network system. Each of the MTCFs will work as a unit of the AU and AE technology. The proposed MTCF technology has been patented and developed by Prof. Guo at Cranfield University, UK, over the past few years. The manufactured MTCF has been successfully employed in a composite SLJ (single-lap joint). In terms of structural integrity, the hybrid SLJ reinforced by the MTCF achieves a 19.1% improvement in ultimate failure strength in comparison to the bonded SLJ.
By increasing the diameter or rearranging the lay-up sequence of the MTCF, the hybrid SLJ reinforced by the MTCF is able to achieve an ultimate strength equivalent to that reinforced by a titanium fastener. The ultimate strength predicted in simulation is in good agreement with the test results. In terms of structural health monitoring, a signal from the MTCF was measured well before the mechanical failure load was reached. This signal provides a warning of an initial crack in the joint, which could not be detected by the strain gauge until final failure.

Keywords: composite single-lap joint, crack propagation, multifunctional composite fastener, structural health monitoring

Procedia PDF Downloads 163
197 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has increased artificial features in the land cover of the countryside and brought forth the urban heat island (UHI) effect. This has led to a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At the moment, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). Thus, this study applies Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to produce an automated LST model and crop-level land cover classification at a local scale, through a theory- and algorithm-based approach utilizing data analysis on a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results showed that comprehensive data analysis and GIS functionalities, integrated with an object-based image analysis (OBIA) approach, can automate complex map production with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI.
It is worthwhile to note that the combined application of remote sensing, geographic information tools, mathematical morphology and data analysis to these interactions can provide microclimate awareness and improved decision-making for land use planning and characterization at the local and neighborhood scales. As a result, it can facilitate problem identification and support mitigation and adaptation more efficiently.
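The LST layer that the OBIA workflow consumes rests on the standard Landsat 8 thermal chain: TOA radiance to at-sensor brightness temperature, then an emissivity correction. A minimal sketch of that chain follows; the calibration constants are the commonly published TIRS band 10 values (in practice read from the scene's MTL metadata), and the sample radiance and emissivity are illustrative, not from the study.

```python
import math

# Landsat 8 TIRS band 10 calibration constants (commonly published values;
# read K1/K2 from the scene's MTL metadata file in practice).
K1, K2 = 774.8853, 1321.0789    # W / (m^2 * sr * um)
WAVELENGTH = 10.895e-6          # band-10 effective wavelength (m)
RHO = 1.438e-2                  # h * c / k_B (m * K)

def brightness_temperature(radiance):
    """At-sensor brightness temperature (K) from TOA spectral radiance."""
    return K2 / math.log(K1 / radiance + 1.0)

def land_surface_temperature(radiance, emissivity):
    """Single-channel emissivity correction of the brightness temperature."""
    tb = brightness_temperature(radiance)
    return tb / (1.0 + (WAVELENGTH * tb / RHO) * math.log(emissivity))

tb = brightness_temperature(10.5)             # sample TOA radiance
lst = land_surface_temperature(10.5, 0.95)    # vegetation-like emissivity
print(round(tb, 1), round(lst, 1))            # LST exceeds TB for e < 1
```

In the workflow above, the per-pixel emissivity would itself come from the land cover classification, which is exactly why the two products are derived together.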

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 282
196 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems, including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, in contrast to previous models, mass exchange between the layers is allowed, so both sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than comparable finite volume methods and also exhibits good shock capturing. For most entrainment and deposition equations a bed-level concentration factor is used, which leads to inaccuracies in both near-bed concentration and total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is the variation (absent from single-layer models) in bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break.
In the dam-break simulation, as expected, the number of fluid layers used creates variation in the resultant bed profile, with more layers giving a greater spread in fluid velocity. These results show a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
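The authors' two-step characteristics/conservative-flux scheme with erosion is beyond a short sketch, but the dam-break test it is applied to can be illustrated with a minimal single-layer, fixed-bed conservative finite volume solver (Lax-Friedrichs interface flux). All parameters below are illustrative.

```python
import numpy as np

g = 9.81

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / np.where(h > 1e-12, h, 1e-12)
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

def step_lax_friedrichs(h, hu, dx, dt):
    """One conservative finite-volume step with the Lax-Friedrichs flux."""
    U = np.array([h, hu])
    F = flux(h, hu)
    # interface fluxes between cells i and i+1
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])  # conservative update
    return Un[0], Un[1]

# Standard dam break: still water, 2 m deep on the left, 1 m on the right.
n, dx, dt = 200, 0.05, 0.005          # dt/dx respects the CFL condition
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
mass0 = h.sum() * dx
for _ in range(80):
    h, hu = step_lax_friedrichs(h, hu, dx, dt)
print(abs(h.sum() * dx - mass0) < 1e-10)   # mass is conserved
```

The conservation property printed at the end is the one the abstract's second stage is designed to guarantee; a multilayer erodible-bed model adds inter-layer and bed exchange terms to this same skeleton.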

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 217
195 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context

Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan

Abstract:

Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey is carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope for Indian industries to reap the benefits of lean, as implementation levels remain quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research is thus carried out in the Indian manufacturing industries. The basic steps of the research methodology followed in this research are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean.
The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to optimum utilization of resources. This work is among the first to offer a survey-based empirical analysis of the role of customers, suppliers and internal practices in effective lean implementation in the Indian manufacturing sector.

Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management

Procedia PDF Downloads 179
194 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is addressed in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both regulatory and economic standpoints is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, this is the first time that the standard formula of the market SCR has been used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, demonstrating the value of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
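The sub-module aggregation underlying the market SCR, with the fixed correlations the abstract contrasts against actual market dynamics, has the form SCR = sqrt(s^T C s). A sketch with placeholder figures (the sub-module amounts and correlation entries below are illustrative, not the regulatory calibration):

```python
import numpy as np

# Illustrative sub-module SCRs (interest, equity, property, spread, FX,
# concentration) and a fixed correlation matrix of the standard-formula
# shape. All figures are placeholders, not the Solvency II calibration.
scr_sub = np.array([30.0, 120.0, 20.0, 40.0, 15.0, 10.0])
corr = np.array([
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

def market_scr(scr_sub, corr):
    """Standard-formula aggregation: SCR = sqrt(s^T C s)."""
    return float(np.sqrt(scr_sub @ corr @ scr_sub))

total = market_scr(scr_sub, corr)
# Diversification benefit: the aggregate SCR is below the simple sum.
print(total < scr_sub.sum(), round(total, 1))
```

The square root of a quadratic form is exactly what makes the criterion non-differentiable at zero and awkward for standard solvers, motivating the bundle/BFGS-SQP combination described above.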

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 117
193 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, TGI results from the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers, caused by inhibition of the cold-sensitive nerve fibers that normally mask the HPC fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers.
The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This demonstrates that the TGI pain intensity observed experimentally can be compared with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
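As an illustration of the disinhibition idea only, and emphatically not the receptor models used in the study, a toy unmasking model reproduces the monotone relationship between the warm-cold gap and HPC activity reported above. The gains and decay constant are invented.

```python
import numpy as np

def hpc_activity(delta_t, cold_gain=1.0, inhibition=0.6):
    """Toy disinhibition model of the thermal grill illusion.

    delta_t : temperature difference between warm and cold bars (deg C).
    HPC drive grows with the cold input but is partly masked by the
    innocuous cold-sensitive pathway; widening the warm-cold gap weakens
    that masking, unmasking the pain-signalling HPC fibres.
    All parameters are illustrative, not fitted to receptor data.
    """
    cold_drive = cold_gain * delta_t
    masking = inhibition * np.exp(-delta_t / 10.0)  # inhibition fades with gap
    return cold_drive * (1.0 - masking)

gaps = np.array([4.0, 8.0, 12.0, 16.0, 20.0])       # five Delta-T levels
act = hpc_activity(gaps)
print(np.all(np.diff(act) > 0))                     # monotone in Delta-T
```

The study's actual model replaces `cold_drive` and `masking` with published warm- and cold-receptor firing models; the qualitative unmasking mechanism is the same.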

Keywords: thermal grill Illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 171
192 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System

Authors: Tomilola J. Ajayi

Abstract:

A category of nanovalve system, comprising an α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN), is theoretically and computationally modelled. The assembly controls the opening and blocking of the MSN pores for an efficient targeted drug release system. Modelling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) in relation to pH variation. Conformational analysis was carried out prior to the formation of the inclusion complex, to find the global minimum of both the neutral and protonated stalk. The B3LYP/6-311G(d,p) level of theory was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G(d,p) level of theory. This conformer was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps while keeping the distance between the dummy atom and the OMe oxygen atom on the stalk restricted. The dummy atom and the carbon atoms of the α-CD structure were equally restricted for orientation A (see Scheme 1). The structures generated at each step were optimized at the B3LYP/6-311G(d,p) level to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to an unsatisfactory host-guest interaction in the nanogate; hence dethreading occurs. A high required interaction energy and a conformational change are theoretically established to drive the release of α-CD at a certain pH. The release was found to occur between pH 5 and 7, which agrees with reported experimental results.
In this study, we applied the theoretical model to the prediction of the experimentally observed pH-responsive nanovalves, which enable the blocking and opening of mesoporous silica nanoparticle pores for a targeted drug release system. Our results show that two major factors are responsible for cargo release at acidic pH: the higher interaction energy needed to maintain the complex/nanovalve after protonation, and the conformational change upon protonation, both of which drive the release over the slight pH change from 5 to 7.
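The dihedral-scan step of the conformational analysis can be illustrated on a model torsional potential. The cosine-series coefficients below are invented, with the minimum simply placed at the reported 85.3°; they stand in for the B3LYP single-point energies, not reproduce them.

```python
import numpy as np

def torsion_energy(theta_deg, v=(1.2, 0.8, 2.5), offset=85.3):
    """Toy torsional potential (arbitrary energy units).

    A truncated cosine series is the usual functional form for rotation
    about a single bond; coefficients v and the minimum position are
    illustrative placeholders, not fitted DFT values.
    """
    t = np.radians(theta_deg - offset)
    return sum(vk * (1 - np.cos(k * t)) / 2 for k, vk in enumerate(v, start=1))

# Scan 0-360 deg in 5 deg increments, as in the conformational analysis.
grid = np.arange(0.0, 360.0, 5.0)
energies = torsion_energy(grid)
best = grid[np.argmin(energies)]
print(best)   # the grid point nearest the 85.3 deg minimum
```

In the actual workflow each grid point is a constrained DFT optimization; the scan-then-refine logic (locate the lowest grid point, then fully optimize from it) is what the sketch mimics.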

Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo

Procedia PDF Downloads 123
191 Inner Quality Parameters of Rapeseed (Brassica napus) Populations in Different Sowing Technology Models

Authors: É. Vincze

Abstract:

Demand for plant oils has increased enormously, due on the one hand to changing human nutrition habits and on the other to the increasing raw material demand of some industrial sectors, as well as to the growth of biofuel production. Besides the determining importance of sunflower in Hungary, both the production area and, in part, the average yield of rapeseed have increased among the oil crops produced. The available variety/hybrid palette has changed and been extended significantly during the past decade. It is agreed that rapeseed production demands professionalism and local experience. Technological elements are successive; high yields cannot be produced without a system-based approach. The aim of the present work was a complex study of one of the most critical elements of rapeseed production technology: sowing. Several sowing technology elements are studied in this research project: biological basis (the hybrid Arkaso is studied in this regard), sowing time (treatments were set to represent the wide period used in industrial practice: early, optimal and late), and plant density (the response of sparse, optimal and overly dense populations was modelled). The multifactorial experimental system enables both the individual and the combined evaluation of rapeseed sowing technology elements, as well as their modelling using experimental data. Yield quality and quantity were also determined in the present experiment, as were the interactions between these factors. The experiment was set up in four replications at the Látókép Plant Production Research Site of the University of Debrecen. Two different sowing times were used in the first experimental year (2014) and three in the second (2015).
Three different plant densities were set in both years: 200, 350 and 500 thousand plants ha-1. A uniform nutrient supply and a row spacing of 45 cm were applied, with winter wheat as the pre-crop. Plant physiological measurements were carried out in the populations of the Arkaso rapeseed hybrid: relative chlorophyll content (SPAD) and leaf area index (LAI) were monitored at seven different measurement times.

Keywords: inner quality, plant density, rapeseed, sowing time

Procedia PDF Downloads 200
190 Use of Bamboo Piles in Ground Improvement Design: Case Study

Authors: Thayalan Nall, Andreas Putra

Abstract:

A major offshore reclamation work is currently underway in Southeast Asia for a container terminal. The total extent of the reclamation is 2600 m x 800 m, and the seabed lies at around -5 mRL below mean sea level. The subsoil profile below the seabed comprises soft marine clays of thickness varying from 8 m to 15 m. To contain the dredging spoil within the reclamation area, perimeter bunds have been constructed to +2.5 mRL. They include breakwaters of trapezoidal geometry, made of boulder-size rock, along the northern, eastern and western perimeters, with a sand bund along the southern perimeter. The breakwaters were constructed on a composite bamboo pile and raft foundation system. Bamboo clusters 8 m long, with seven individual bamboos bundled together as one, have been installed within the footprint of the breakwater, below the seabed in soft marine clay. To facilitate drainage, two prefabricated vertical drains (PVDs) have been attached to each cluster. Once the cluster piles were installed, a bamboo raft was placed as a load transfer platform. The rafts were made up of five layers of bamboo mattress, with the bamboos in each layer spaced at 200 mm centres. The rafts would not sink under their own weight and hence were sunk by loading quarry-run rock onto them. Bamboo is a building material available in abundance in Indonesia and obtained at a relatively low cost. Bamboo piles are commonly used as semi-rigid inclusions to improve the compressibility and stability of soft soils. Although bamboo is widely used in soft soil engineering design, no local design guides are available and designs are carried out based on local experience. In June 2015, when the first load of sand was pumped by a dredging vessel next to the breakwater, a 150 m long section of the breakwater failed and was displaced by between 1.2 m and 4.0 m. The cause of the failure was investigated in order to implement remedial measures to reduce the risk of further failures.
Analyses using both limit equilibrium approach and finite element modelling revealed two plausible modes of breakwater failure. This paper outlines: 1) Developed Geology and the ground model, 2) The techniques used for the installation of bamboo piles, 3) Details of the analyses including modes and mechanism of failure and 4) Design changes incorporated to reduce the risk of failure.
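As a first-pass illustration of the limit-equilibrium side of such a back-analysis, far cruder than the slip-circle and finite element work described above, an undrained bearing check of an embankment on soft clay can be sketched as below. The clay strength and fill unit weight are assumed values; the 7.5 m height follows from the quoted levels (-5 mRL seabed to +2.5 mRL crest).

```python
def bearing_fs(su_kpa, gamma_kn_m3, height_m, nc=5.14):
    """Undrained bearing-capacity screen for an embankment on soft clay.

    FS = Nc * su / (gamma * H), with Nc = 5.14 for a strip load on
    undrained clay. A first-pass check only, not the project analyses.
    """
    return nc * su_kpa / (gamma_kn_m3 * height_m)

# Illustrative numbers: 15 kPa undrained strength, 18 kN/m3 fill,
# 7.5 m breakwater height (seabed -5 mRL to crest +2.5 mRL).
print(round(bearing_fs(15.0, 18.0, 7.5), 2))   # FS < 1: failure plausible
```

A factor of safety below unity for plausible soft-clay strengths is consistent with the observed failure, which is exactly what the more rigorous analyses were used to confirm and explain.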

Keywords: bamboo piles, ground improvement, reclamation, breakwater failure

Procedia PDF Downloads 417
189 Finite Element Modelling of Mechanical Connector in Steel Helical Piles

Authors: Ramon Omar Rosales-Espinoza

Abstract:

Pile-to-pile mechanical connections are used if the depth of the soil layers with sufficient bearing strength exceeds the original (“leading”) pile length, with the additional pile segment being termed the “extension” pile. Mechanical connectors permit a safe transmission of forces from the leading to the extension pile while meeting strength and serviceability requirements. Common types of connectors consist of an assembly of sleeve-type external couplers, bolts, pins, and other mechanical interlock devices that ensure the transmission of compressive, tensile, torsional and bending stresses between the leading and extension pile segments. While welded connections allow for a relatively simple structural design, mechanical connections are advantageous because they lead to shorter installation times and significant cost reductions, since specialized workmanship and inspection activities are not required. However, common practices followed in designing mechanical connectors neglect important aspects of the assembly response, such as stress concentration around pin/bolt holes, torsional stresses from the installation process, and the interaction between the forces at the installation (torsion), service (compression/tension-bending), and removal (torsion) stages. This translates into potentially unsatisfactory designs in terms of the ultimate and service limit states, exhibiting either reduced strength or excessive deformations. In this study, the experimental response of one type of mechanical connector under compressive forces is presented, in terms of strength, deformation and failure modes. The tests revealed that the type of connector used can safely transmit forces from pile to pile. Using the results from the compressive tests, an analysis model was developed using the finite element (FE) method to study the interaction of forces during the installation and service stages of a typical mechanical connector.
The response of the analysis model is used to identify potential areas for design optimization, including size, the gap between leading and extension piles, the number of pins/bolts, hole sizes, and material properties. The results show that the design of mechanical connectors should take into account the interaction of forces present at every stage of their life cycle, and that the torsional stresses occurring during installation are critical for the safety of the assembly.
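A first-order check of how installation torsion combines with service-stage axial force can be sketched as below (a hedged illustration only, not the authors' FE model; the section dimensions, loads, and stress concentration factor are assumed values):

```python
import math

def shaft_torsion_stress(torque_nm, d_outer_m, d_inner_m):
    """Shear stress at the outer fibre of a hollow circular section: tau = T*r/J."""
    j = math.pi * (d_outer_m ** 4 - d_inner_m ** 4) / 32.0  # polar moment of inertia
    return torque_nm * (d_outer_m / 2.0) / j

def axial_stress(force_n, d_outer_m, d_inner_m):
    """Normal stress from an axial force over the pipe cross-sectional area."""
    area = math.pi * (d_outer_m ** 2 - d_inner_m ** 2) / 4.0
    return force_n / area

def von_mises(sigma, tau, kt=1.0):
    """Equivalent stress, with an optional concentration factor kt for pin/bolt holes."""
    return kt * math.sqrt(sigma ** 2 + 3.0 * tau ** 2)

# Assumed 219 mm OD x 8 mm wall pile sleeve, 30 kN.m installation torque,
# 500 kN service compression (illustrative numbers only)
tau = shaft_torsion_stress(30e3, 0.219, 0.203)
sigma = axial_stress(500e3, 0.219, 0.203)
print(f"tau = {tau / 1e6:.1f} MPa, sigma = {sigma / 1e6:.1f} MPa")
print(f"von Mises near hole (kt = 3): {von_mises(sigma, tau, 3.0) / 1e6:.1f} MPa")
```

Such a hand check captures the force interaction the abstract highlights, but not the local stress fields around the pin holes, which is precisely what the FE model resolves.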

Keywords: piles, FEA, steel, mechanical connector

Procedia PDF Downloads 264
188 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. 
These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
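Gait measures of the kind reported above (cycle time, stride length, variability) can be derived from wearable kinematic data along these lines (a hedged sketch with hypothetical numbers, not the authors' pipeline):

```python
import statistics

def gait_metrics(heel_strike_times_s, stride_lengths_m):
    """Summarize gait from heel-strike timestamps and per-stride lengths.
    Illustrative metrics only: mean gait cycle time, its coefficient of
    variation (a gait-variability measure), and mean stride length."""
    cycle_times = [t2 - t1 for t1, t2 in
                   zip(heel_strike_times_s, heel_strike_times_s[1:])]
    mean_cycle = statistics.mean(cycle_times)
    return {
        "mean_cycle_time_s": mean_cycle,
        "cycle_time_cv": statistics.stdev(cycle_times) / mean_cycle,
        "mean_stride_m": statistics.mean(stride_lengths_m),
    }

# Hypothetical heel-strike timestamps (s) and stride lengths (m) from a wearable IMU
m = gait_metrics([0.0, 1.1, 2.3, 3.3, 4.5], [1.20, 1.05, 1.15, 1.10])
print(m)
```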

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 24
187 A Modelling of Main Bearings in the Two-Stroke Diesel Engine

Authors: Marcin Szlachetka, Rafal Sochaczewski, Lukasz Grabowski

Abstract:

This paper presents the results of load simulations of the main bearings in a two-stroke Diesel engine. A model of the engine lubrication system with connections of its main lubrication nodes, i.e., the connection of the main bearings in the engine block with the crankshaft, the connection of the crankpins with the connecting rods, and the connection of the piston pins with the pistons, has been created for our calculations performed using AVL EXCITE Designer. The analysis covers the loads given as a pressure distribution in the hydrodynamic oil film, a temperature distribution on the main bush surfaces for the specified radial clearance values, as well as the impact of the gas force on the minimum oil film thickness in the main bearings depending on crankshaft rotational speed and oil temperature in the bearings. One of the main goals of the research has been to determine whether the minimum oil film thickness at which fluid friction occurs can be achieved at each crankshaft speed. Our model calculates various oil film parameters, i.e., film thickness, pressure distribution, and the change in oil temperature. It additionally enables an analysis of the oil temperature distribution on the surfaces of the bearing seats, allowing the selected bearing clearances of the main engine to be verified both under normal operating conditions and under extreme ones showing a significant temperature increase above the limit value. The research has been conducted for several crankshaft speeds ranging from 1000 rpm to 4000 rpm. The oil pressure in the bearings ranged from 2 to 5 bar depending on engine speed, and the oil temperature ranged from 90 to 120 °C. A main bearing clearance of 0.025 mm has been adopted for the calculations and analysis, and SAE 5W-30 oil has been used for the simulations. The paper discusses selected research results referring to several specific operating points and different temperatures of the lubricating oil in the bearings.
The research results show that for the investigated main bearing bushes of the shaft, the values fall within the limit ranges despite the oil temperature in the bearings reaching 120 °C. Even when the bearings are loaded with the maximum pressure, no excessive temperature rise occurs on the bush surfaces; the oil temperature increases by 17 °C, reaching 137 °C at a speed of 4000 rpm. The minimum film thickness at which fluid friction occurs has been achieved at each of the operating points and at each of the engine crankshaft speeds. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
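The fluid-friction criterion discussed above, a minimum oil film thickness maintained at every speed, can be illustrated with classical journal-bearing relations (a hedged sketch; only the 0.025 mm clearance is taken from the study, while the load, viscosity, and eccentricity values below are placeholders):

```python
import math

def sommerfeld_number(mu_pa_s, n_rps, load_n, r_m, c_m, l_m, d_m):
    """Sommerfeld number S = (r/c)^2 * mu*N/P, with P = W/(L*D) the specific load."""
    p = load_n / (l_m * d_m)
    return (r_m / c_m) ** 2 * mu_pa_s * n_rps / p

def min_film_thickness(c_m, eccentricity_ratio):
    """Minimum oil film thickness of a journal bearing: h_min = c*(1 - eps)."""
    return c_m * (1.0 - eccentricity_ratio)

c = 25e-6  # radial clearance from the study, m
# Placeholder viscosity (hot SAE 5W-30), 4000 rpm, and assumed bearing geometry/load
s = sommerfeld_number(0.008, 4000 / 60, 20e3, 0.04, c, 0.03, 0.08)
print(f"Sommerfeld number: {s:.3f}")
print(f"h_min at eccentricity ratio 0.9: {min_film_thickness(c, 0.9) * 1e6:.2f} um")
```

In practice the eccentricity ratio is read from design charts (or, as here, from an elastohydrodynamic solver such as AVL EXCITE) as a function of the Sommerfeld number.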

Keywords: diesel engine, main bearings, opposing pistons, two-stroke

Procedia PDF Downloads 137
186 The Effect of Swirl on the Flow Distribution in Automotive Exhaust Catalysts

Authors: Piotr J. Skusiewicz, Johnathan Saul, Ijhar Rusli, Svetlana Aleksandrova, Stephen F. Benjamin, Miroslaw Gall, Steve Pierson, Carol A. Roberts

Abstract:

The application of turbocharging in automotive engines leads to swirling flow entering the catalyst. The behaviour of this type of flow within the catalyst has yet to be adequately documented. This work discusses the effect of swirling flow on the flow distribution in automotive exhaust catalysts. Compressed air supplied to a moving-block swirl generator allowed for swirling flow with variable intensities to be generated. Swirl intensities were measured at the swirl generator outlet using single-sensor hot-wire probes. The swirling flow was fed into diffusers with total angles of 10°, 30° and 180°. Downstream of the diffusers, a wash-coated diesel oxidation catalyst (DOC) of length 143.8 mm, diameter 76.2 mm and nominal cell density of 400 cpsi was fitted. Velocity profiles were measured at the outlet sleeve about 30 mm downstream of the monolith outlet using single-sensor hot-wire probes. Wall static pressure was recorded using a multi-tube manometer connected to pressure taps positioned along the diffuser walls. The results show that as swirl is increased, more of the flow is directed towards the diffuser walls. The velocity decreases around the centre-line and maximum velocities are observed close to the outer radius of the monolith for all flow rates. At the maximum swirl intensity, reversed flow was recorded near the centre of the monolith. Wall static pressure measurements in the 180° diffuser indicated no pressure recovery as the flow enters the diffuser. This is indicative of flow separation at the inlet to the diffuser. To gain insight into the flow structure, CFD simulations have been performed for the 180° diffuser for a flow rate of 63 g/s. The geometry of the model consists of the complete assembly from the upstream swirl generator to the outlet sleeve. Modelling of the flow in the monolith was achieved using the porous medium approach, where the monolith with parallel flow channels is modelled as a porous medium that resists the flow. 
A reasonably good agreement was achieved between the experimental and CFD results downstream of the monolith. The CFD simulations allowed visualisation of the separation zones and the central toroidal recirculation zones that occur within the expansion region at certain swirl intensities, which are highlighted in the results.
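The porous-medium treatment of the monolith can be sketched as follows (a hedged illustration of the general Darcy-Forchheimer form such models use; the resistance coefficients here are placeholders, not values fitted in this study):

```python
def monolith_pressure_gradient(v, mu, rho, alpha, c2):
    """Darcy-Forchheimer resistance used in porous-medium monolith models:
    dp/dx = (mu/alpha)*v + c2*(rho/2)*v**2
    where alpha is the permeability (viscous term) and c2 the inertial
    resistance factor."""
    return (mu / alpha) * v + c2 * (rho / 2.0) * v * v

# Placeholder coefficients for a 400 cpsi monolith (illustrative only)
mu, rho = 1.8e-5, 1.2     # air viscosity (Pa.s) and density (kg/m^3)
alpha, c2 = 1.0e-7, 20.0  # permeability (m^2), inertial factor (1/m)
for v in (5.0, 10.0, 20.0):  # superficial channel velocities, m/s
    dpdx = monolith_pressure_gradient(v, mu, rho, alpha, c2)
    print(f"v = {v:5.1f} m/s -> dp/dx = {dpdx:9.1f} Pa/m")
```

The laminar (viscous) term dominates in monolith channels at typical exhaust velocities, which is why the monolith redistributes a maldistributed inlet flow.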

Keywords: catalyst, computational fluid dynamics, diffuser, hot-wire anemometry, swirling flow

Procedia PDF Downloads 304
185 Assessment of Hydrologic Response of a Naturalized Tropical Coastal Mangrove Ecosystem Due to Land Cover Change in an Urban Watershed

Authors: Bryan Clark B. Hernandez, Eugene C. Herrera, Kazuo Nadaoka

Abstract:

Mangrove forests thriving in intertidal zones in tropical and subtropical regions of the world offer a range of ecosystem services, including carbon storage and sequestration. They can mitigate the detrimental effects of climate change, sequestering carbon at rates two to four times greater than those of mature tropical rainforests. Moreover, they are effective natural defenses against storm surges and tsunamis. However, their proliferation depends significantly on the prevailing hydroperiod at the coast. In the Philippines, these coastal ecosystems have been severely threatened, with a 50% decline in areal extent observed from 1918 to 2010. The highest decline occurred between 1950 and 1972, when national policies encouraged the development of fisheries and aquaculture. With the intensive land use conversion upstream, changes in the freshwater-saltwater envelope at the coast may considerably impact mangrove growth conditions. This study investigates a developing urban watershed in Kalibo, Aklan province, with a 220-hectare mangrove forest replanted over 30 years on former coastal mudflats. Since then, the mangrove forest has been sustainably conserved and declared a protected area. A hybrid land cover classification technique was used to classify Landsat images for the years 1990, 2010, and 2017. The digital elevation model utilized, derived from Interferometric Synthetic Aperture Radar (IFSAR) with a 5-meter resolution, was used to delineate the watersheds. Using numerical modelling techniques, the influence of land cover change on flow and sediment dynamics was simulated through hydrologic and hydraulic analysis. While significant land cover change occurred upland, thereby increasing runoff and sediment loads, the abundance of mangrove forests adjacent to the coast of the urban watershed was nevertheless sustained. However, significant alteration of the coastline was observed in Kalibo through the years, probably due to the massive land-use conversion upstream and the extensive replanting of mangroves downstream.
Understanding the hydrologic-hydraulic response of these watersheds to land cover change is essential to helping local governments and stakeholders better manage these mangrove ecosystems.
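The runoff increase from upstream land-use conversion can be illustrated with a curve-number relation (the SCS-CN method, shown here purely as a sketch; the study does not specify its rainfall-runoff formulation, and the curve numbers below are assumed):

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """SCS curve-number direct runoff (all depths in mm):
    S = 25400/CN - 254,  Ia = 0.2*S,  Q = (P - Ia)^2 / (P + 0.8*S)."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s  # initial abstraction before runoff begins
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# Urbanization raises the curve number, increasing runoff for the same storm
print(f"50 mm storm, CN 70 (mixed cover): {scs_runoff_mm(50.0, 70):.1f} mm runoff")
print(f"50 mm storm, CN 85 (urbanized):   {scs_runoff_mm(50.0, 85):.1f} mm runoff")
```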

Keywords: coastal mangroves, hydrologic model, land cover change, Philippines

Procedia PDF Downloads 122
184 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques, and CNN models with ensemble modelling techniques, did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
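The path-context idea can be sketched with Python's own `ast` module (the study targets Java and C++ codebases with the full code2vec pipeline; this simplified illustration merely joins leaf-to-leaf AST paths at the lowest common ancestor):

```python
import ast
from itertools import combinations

def _leaves(tree):
    """Yield (token, path_from_root) for identifier and constant leaves."""
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):
            yield node.id, path
        elif isinstance(node, ast.Constant):
            yield repr(node.value), path
        for child in ast.iter_child_nodes(node):
            yield from walk(child, path)
    yield from walk(tree, [])

def path_contexts(source):
    """Build (leaf, path, leaf) triples joined at the lowest common ancestor,
    in the spirit of code2vec's path-context representation."""
    leaves = list(_leaves(ast.parse(source)))
    contexts = []
    for (tok_a, pa), (tok_b, pb) in combinations(leaves, 2):
        i = 0  # length of the common prefix of the two root paths
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        # up from leaf a, through the LCA (pa[i-1]), down to leaf b
        path = pa[:i - 1:-1] + pa[i - 1:i] + pb[i:]
        contexts.append((tok_a, "^".join(path), tok_b))
    return contexts

for ctx in path_contexts("x = y + 1"):
    print(ctx)
```

Each triple (token, syntactic path, token) is what the Code2Vec architecture embeds and aggregates into a function-level vector.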

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 107
183 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations

Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim

Abstract:

A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery ports, the fluid is transported inside cavities formed by the teeth and driven by the shaft. Geometrically, it is worth noting that the internal gear has one tooth less than the external one. Simcenter Amesim v.16 includes a new submodel for modelling the hydraulic behavior of Gerotor pumps (THCDGP0). This submodel considers leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the submodel THCDGP0 computes the evolution of each cavity volume and its relative position with respect to the suction or delivery areas. This module, based on international publications, presents robust results up to 6 000 rpm for pressures greater than atmospheric level. For higher rotational speeds or lower pressures, oil aeration and cavitation effects are significant and sharply degrade the pump’s performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e., undissolved, as bubbles) when pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry’s law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the sound speed in the oil. Analytical models were compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was supplied with the results of these analyses, which successfully improved the simulation results up to 14 000 rpm.
This work provides a sound foundation for designing the next generation of Gerotor pumps, reaching rotational speeds above 25 000 rpm. The results of the improved module will be compared with tests on this new pump demonstrator.
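The drop in effective sound speed caused by undissolved gas, the quantity estimated here from the parietal pressure sensors, is commonly described by Wood's equation for a bubbly liquid. A minimal sketch with illustrative oil/air properties (not the study's measured values):

```python
import math

def wood_sound_speed(void_fraction, rho_l, c_l, rho_g, c_g):
    """Wood's equation: sound speed of a bubbly liquid from the volume-weighted
    mixture density and compressibility (bulk modulus K = rho * c**2)."""
    k_l = rho_l * c_l ** 2
    k_g = rho_g * c_g ** 2
    rho_m = (1.0 - void_fraction) * rho_l + void_fraction * rho_g
    k_m = 1.0 / ((1.0 - void_fraction) / k_l + void_fraction / k_g)
    return math.sqrt(k_m / rho_m)

# Even 1% undissolved air drastically lowers the sound speed of the oil
for alpha in (0.0, 0.001, 0.01):
    c = wood_sound_speed(alpha, 850.0, 1400.0, 1.2, 340.0)
    print(f"void fraction {alpha:.3f} -> c = {c:7.1f} m/s")
```

The strong sensitivity of the mixture sound speed to tiny void fractions is what makes the upstream pressure measurements an effective probe of the occluded gas content.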

Keywords: gerotor pump, high speed, numerical simulations, aeronautic, aeration, cavitation

Procedia PDF Downloads 133