Search results for: ageing model
12251 Response Regimes and Vibration Mitigation in Equivalent Mechanical Model of Strongly Nonlinear Liquid Sloshing
Authors: Maor Farid, Oleg Gendelman
Abstract:
An equivalent mechanical model of liquid sloshing in a partially filled cylindrical vessel is treated for the cases of free oscillations and of horizontal base excitation. The model is designed to cover both the linear and the essentially nonlinear sloshing regimes. The latter fluid behaviour might involve hydraulic impacts interacting with the inner walls of the tank. These impulsive interactions are often modelled by high-power potential and dissipation functions. For the sake of analytical description, we use the traditional approach of modelling the impacts with a velocity-dependent restitution coefficient. This modelling is similar to the vibro-impact nonlinear energy sink (VI NES), which was recently explored for its vibration mitigation performance and nonlinear response regimes. Steady-state periodic regimes and chaotic strongly modulated responses (CSMR) are detected. These dynamical regimes are described by the system's slow motion on the slow invariant manifold (SIM). There is good agreement between the analytical results and numerical simulations. Subsequently, the finite element (FE) method is used to determine and verify the model parameters and to identify dominant dynamical regimes, natural modes and frequencies. The tank failure modes and the corresponding critical locations are identified. A mathematical relation is found between the motion of the degrees of freedom (DOFs) and the mechanical stress applied in the critical section of the tank. This is a first attempt to take large-amplitude nonlinear sloshing and tank structural elasticity effects into consideration for design, regulation definition and resistance analysis purposes. The contributions of both linear (tuned mass damper, TMD) and nonlinear (nonlinear energy sink, NES) passive energy absorbers to the overall system mitigation are examined for the first time, in terms of both stress reduction and vibration decay time.
Keywords: nonlinear energy sink (NES), reduced-order modelling, liquid sloshing, vibration mitigation, vibro-impact dynamics
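For readers who want to experiment with the modelling idea described above, the following is a minimal sketch (not the authors' code) of a single-degree-of-freedom vibro-impact oscillator under harmonic base excitation, in which impacts with rigid barriers are handled through a restitution coefficient; the barrier gap, forcing parameters, and the assumed velocity-dependent restitution law are illustrative only.

```python
import numpy as np

# Minimal sketch: SDOF vibro-impact oscillator under harmonic base excitation.
# All parameter values and the velocity-dependent restitution law are assumptions,
# not values taken from the paper.

m, k, c = 1.0, 1.0, 0.02        # mass, stiffness, damping
gap = 1.0                        # clearance between mass and rigid barrier
A, Omega = 0.3, 1.0              # base-excitation amplitude and frequency

def restitution(v):
    """Assumed velocity-dependent restitution coefficient."""
    return 0.7 / (1.0 + 0.1 * abs(v))

dt, T = 1e-4, 200.0
n = int(T / dt)
x, v = 0.0, 0.0                  # displacement relative to the tank, velocity
history = np.empty((n, 2))

for i in range(n):
    t = i * dt
    base_acc = A * Omega**2 * np.cos(Omega * t)      # base acceleration forcing
    a = (-c * v - k * x) / m + base_acc
    v += a * dt
    x += v * dt
    # Impact: mass reaches a barrier while still moving towards it
    if (x > gap and v > 0) or (x < -gap and v < 0):
        x = float(np.clip(x, -gap, gap))
        v = -restitution(v) * v                      # impulsive velocity reversal
    history[i] = x, v

print("max |x| =", np.abs(history[:, 0]).max())
```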
Procedia PDF Downloads 147
12250 Artificial Steady-State-Based Nonlinear MPC for Wheeled Mobile Robot
Authors: M. H. Korayem, Sh. Ameri, N. Yousefi Lademakhi
Abstract:
To ensure the stability of closed-loop nonlinear model predictive control (NMPC) within a finite horizon, appropriately designed terminal ingredients are needed, which can be a time-consuming and challenging effort. Otherwise, in order to ensure the stability of the control system, an infinite prediction horizon has to be considered. Increasing the prediction horizon increases the computational demand and slows down the implementation of the method. In this study, a new technique is proposed to ensure system stability without terminal ingredients. This technique has been employed in the design of the NMPC algorithm, removing the complexity of designing terminal ingredients and reducing the computational burden. The studied system is a wheeled mobile robot (WMR) subjected to non-holonomic constraints. Simulations have been carried out for two problems: trajectory tracking and the adjustment (regulation) mode.
Keywords: wheeled mobile robot, nonlinear model predictive control, stability, without terminal ingredients
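As a rough illustration of the receding-horizon idea for a non-holonomic robot (without any terminal ingredients), the sketch below sets up a short-horizon NMPC step for a unicycle-type kinematic model and solves it with a generic optimizer; the horizon length, weights, bounds, and reference are assumptions and this is not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal NMPC sketch for a unicycle-type wheeled mobile robot.
# Horizon, weights, bounds and reference are illustrative assumptions.

dt, N = 0.1, 10                      # sampling time and prediction horizon

def rollout(state, controls):
    """Propagate the non-holonomic kinematics x' = v cos(th), y' = v sin(th), th' = w."""
    x, y, th = state
    traj = []
    for v, w in controls:
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return np.array(traj)

def cost(u_flat, state, ref):
    controls = u_flat.reshape(N, 2)
    traj = rollout(state, controls)
    pos_err = traj[:, :2] - ref            # track a constant reference point here
    return np.sum(pos_err**2) + 0.01 * np.sum(controls**2)

state = np.array([0.0, 0.0, 0.0])          # x, y, heading
ref = np.array([1.0, 1.0])                 # target position
bounds = [(-1.0, 1.0), (-2.0, 2.0)] * N    # |v| <= 1 m/s, |w| <= 2 rad/s

for step in range(30):                     # receding-horizon loop
    u0 = np.zeros(2 * N)
    sol = minimize(cost, u0, args=(state, ref), bounds=bounds, method="SLSQP")
    v, w = sol.x[:2]                       # apply only the first control move
    state = state + np.array([v * np.cos(state[2]) * dt,
                              v * np.sin(state[2]) * dt,
                              w * dt])

print("final state:", state)
```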
Procedia PDF Downloads 93
12249 Coherent Optical Tomography Imaging of Epidermal Hyperplasia in Vivo in a Mouse Model of Oxazolone Induced Atopic Dermatitis
Authors: Eric Lacoste
Abstract:
Laboratory animals are currently widely used as models of human pathologies in dermatology, such as atopic dermatitis (AD). These models provide a better understanding of the pathophysiology of this complex and multifactorial disease, the discovery of potential new therapeutic targets, and the testing of the efficacy of new therapeutics. However, confirmation of the correct development of AD is mainly based on histology from skin biopsies, requiring invasive surgery or euthanasia of the animals, plus slicing and staining protocols. In contrast, accessible imaging technologies such as optical coherence tomography (OCT) now allow non-invasive visualization of the main histological structures of the skin (such as the stratum corneum, epidermis, and dermis) and assessment of the dynamics of the pathology or of the efficacy of new treatments. Briefly, female immunocompetent hairless mice (SKH1 strain) were sensitized and challenged topically on the back and ears for about 4 weeks. Back skin and ear thickness were measured with a calliper three times per week, in addition to a macroscopic evaluation of atopic dermatitis lesions on the back: erythema, scaling, and excoriation scoring. In addition, OCT was performed on the back and ears of the animals. OCT produces a virtual in-depth section (tomography) of the imaged organ using a laser, a camera, and image processing software, allowing fast, non-contact, and non-denaturing acquisitions of the explored tissues. To perform the imaging sessions, the animals were anesthetized with isoflurane and placed on a support under the OCT for a total examination time of 5 to 10 minutes. The results show a good correlation of the OCT technique with classical HES histology for skin lesion structures such as hyperkeratosis, epidermal hyperplasia, and dermis thickness. This OCT imaging technique can, therefore, be used in live animals at different times for longitudinal evaluation by repeated measurements of lesions in the same animals, in addition to the classical histological evaluation. Furthermore, this original imaging technique speeds up research protocols, reduces the number of animals, and refines the use of laboratory animals.
Keywords: atopic dermatitis, mouse model, oxazolone model, histology, imaging
Procedia PDF Downloads 135
12248 Emotion Motives Predict the Mood States of Depression and Happiness
Authors: Paul E. Jose
Abstract:
A new self-report measure named the General Emotion Regulation Measure (GERM) assesses four key goals for experiencing broad valenced groups of emotions: 1) trying to experience positive emotions (e.g., joy, pride, liking a person); 2) trying to avoid experiencing positive emotions; 3) trying to experience negative emotions (e.g., anger, anxiety, contempt); and 4) trying to avoid experiencing negative emotions. Although individual differences in GERM motives have been identified, evidence of validity with common mood outcomes is lacking. In the present study, whether GERM motives predict self-reported subjective happiness and depressive symptoms (CES-D) was tested with a community sample of 833 young adults. It was predicted that the GERM motive of trying to experience positive emotions would positively predict subjective happiness and, analogously, that trying to experience negative emotions would predict depressive symptoms. An initial path model was constructed in which the four GERM motives predicted both subjective happiness and depressive symptoms. The fully saturated model included three non-significant paths, which were subsequently pruned, and a good-fitting model was obtained (CFI = 1.00; RMR = .007). Two GERM motives significantly predicted subjective happiness: 1) trying to experience positive emotions (β = .38, p < .001) and 2) trying to avoid experiencing positive emotions (β = -.48, p < .001). Thus, individuals who reported high levels of trying to experience positive emotions reported high levels of happiness, and individuals who reported low levels of trying to avoid experiencing positive emotions also reported high levels of happiness. Three GERM motives significantly predicted depressive symptoms: 1) trying to avoid experiencing positive emotions (β = .20, p < .001); 2) trying to experience negative emotions (β = .15, p < .001); and 3) trying to experience positive emotions (β = -.07, p < .001). In agreement with predictions, trying to experience positive emotions was positively associated with subjective happiness, and trying to experience negative emotions was positively associated with depressive symptoms. In essence, these two valenced mood states seem to be sustained by trying to experience similarly valenced emotions. However, the three other significant paths in the model indicated that emotion motives play a complicated role in supporting both positive and negative mood states. For subjective happiness, the GERM motive of not trying to avoid positive emotions, i.e., not avoiding happiness, was also a strong predictor of happiness. Thus, the people who report being the happiest are those individuals who not only strive to experience positive emotions but also are not ambivalent about them. The pattern for depressive symptoms was more nuanced. Individuals who reported higher depressive symptoms also reported higher levels of avoiding positive emotions and of trying to experience negative emotions. The strongest predictor of depressed mood was avoiding positive emotions, which suggests that happiness aversion or fear of happiness is an important motive for dysphoric people. Future work should determine whether these patterns of association are similar among clinically depressed people, and longitudinal data are needed to determine the temporal relationships between motives and mood states.
Keywords: emotion motives, depression, subjective happiness, path model
Procedia PDF Downloads 206
12247 Understanding Mathematics Achievements among U. S. Middle School Students: A Bayesian Multilevel Modeling Analysis with Informative Priors
Authors: Jing Yuan, Hongwei Yang
Abstract:
This paper aims to understand U.S. middle school students’ mathematics achievements by examining relevant student- and school-level predictors. Through a variance component analysis, the study first identifies evidence supporting the use of multilevel modeling. Then, a multilevel analysis is performed under Bayesian statistical inference, where prior information is incorporated into the modeling process. During the analysis, independent variables are entered sequentially in the order of theoretical importance to create a hierarchy of models. By evaluating each model using Bayesian fit indices, a best-fitting and most parsimonious model is selected, on which Bayesian statistical inference is performed for the purpose of result interpretation and discussion. The primary dataset for Bayesian modeling is derived from the Program for International Student Assessment (PISA) in 2012, with a secondary PISA dataset from 2003 analyzed under the traditional ordinary least squares method to provide the information needed to specify informative priors for a subset of the model parameters. The dependent variable is a composite measure of mathematics literacy, calculated from an exploratory factor analysis of all five PISA 2012 mathematics achievement plausible values, for which multiple pieces of evidence support data unidimensionality. The independent variables include demographic variables and content-specific variables: mathematics efficacy, teacher-student ratio, proportion of girls in the school, etc. Finally, the entire analysis is performed using the MCMCpack and MCMCglmm packages in R.
Keywords: Bayesian multilevel modeling, mathematics education, PISA, multilevel
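As a sketch of the kind of two-level specification implied above (a random-intercept model with one student-level and one school-level predictor; the exact covariate structure used in the paper may differ), one can write:

```latex
% Level 1: students i within schools j
y_{ij} = \beta_{0j} + \beta_1 x_{ij} + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

% Level 2: school-specific intercepts
\beta_{0j} = \gamma_{00} + \gamma_{01} w_j + u_{0j},
\qquad u_{0j} \sim \mathcal{N}(0, \tau^2)
```

Here y_ij is the composite mathematics literacy score, x_ij a student-level predictor (e.g., mathematics efficacy), and w_j a school-level predictor (e.g., teacher-student ratio); under the Bayesian treatment, priors are placed on the γ coefficients and the variance components, with the informative priors derived from the PISA 2003 OLS analysis.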
Procedia PDF Downloads 336
12246 Analysis and Prediction of the Behavior of the Landslide at Ain El Hammam, Algeria Based on the Second Order Work Criterion
Authors: Zerarka Hizia, Akchiche Mustapha, Prunier Florent
Abstract:
The landslide of Ain El Hammam (AEH) is characterized by a complex geology and a high hydrogeological hazard. AEH's perpetual reactivation compels us to look closely at its triggers and to better understand the mechanisms of its evolution in mass and in depth. This study builds a numerical model to simulate the influencing factors such as precipitation, non-saturation, and pore pressure fluctuations, using the Plaxis software. For a finer analysis of instabilities, we use Hill's criterion, based on the sign of the second-order work, which is the most appropriate material stability criterion for non-associated elastoplastic materials. The results of this type of calculation allow us, in theory, to predict the shape and position of the slip surface(s) along which ground movements of the slope are liable to occur, before the rupture given by the Mohr-Coulomb plastic limit is reached. To validate the numerical model, an analysis of inclinometer measurements is performed to confirm the direction of movement and the kinematics of the sliding mechanism of AEH’s slope.
Keywords: landslide, second order work, precipitation, inclinometers
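For reference, Hill's criterion mentioned above is based on the sign of the second-order work, which in its local (material point) form can be written as:

```latex
d^2W \;=\; d\boldsymbol{\sigma} : d\boldsymbol{\varepsilon}
\;=\; \sum_{i,j} d\sigma_{ij}\, d\varepsilon_{ij}
```

The material point is considered stable when d²W > 0 for all admissible incremental loading directions; a vanishing or negative value for some direction signals a potential instability that may appear before the Mohr-Coulomb plastic limit is reached.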
Procedia PDF Downloads 181
12245 Analysis of Brushless DC Motor with Trapezoidal Back EMF Using Matlab
Authors: Taha Ahmed Husain
Abstract:
The dynamic characteristics, such as speed and torque, as well as the voltages and currents of a PWM brushless DC motor inverter, are analyzed with a MATLAB model. The contributions of the external load torque and the friction torque are monitored. The switching function technique is adopted for the current control of the embedded three-phase inverter that drives the brushless DC motor. With switching functions, power conversion circuits can be modelled according to their functions rather than their circuit topologies, which simplifies the overall power conversion model. The trapezoidal back-EMF type is used in the model as it has lower switching loss compared with the sinusoidal type. Results show a reliable time analysis for speed, torque, and phase and line voltages and currents, and the effect of current commutation is clearly observed.
Keywords: BLDC motor, brushless DC motors, PWM inverter, DC motor control, trapezoidal back EMF, ripple torque in brushless DC motor
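A minimal sketch of the idealized trapezoidal back-EMF waveform used in such models is given below; the 120-degree flat-top shape is the standard idealization, while the back-EMF constant, speed, and normalization are assumptions for illustration.

```python
import numpy as np

def trapezoidal_emf(theta_e):
    """Normalized per-phase trapezoidal back-EMF versus electrical angle (rad):
    120-degree flat tops with 60-degree linear transitions (usual idealization)."""
    th = np.degrees(np.mod(theta_e, 2 * np.pi))
    if th < 120:
        return 1.0                          # positive flat top
    if th < 180:
        return 1.0 - (th - 120) / 30.0      # ramp from +1 down to -1
    if th < 300:
        return -1.0                         # negative flat top
    return -1.0 + (th - 300) / 30.0         # ramp from -1 back to +1

# Phase back-EMFs: e = k_e * w_m * f(theta_e), phases shifted by 120 electrical degrees.
k_e, w_m = 0.1, 150.0                        # assumed back-EMF constant and speed
theta = np.linspace(0, 4 * np.pi, 1000)
e_a = np.array([k_e * w_m * trapezoidal_emf(t) for t in theta])
e_b = np.array([k_e * w_m * trapezoidal_emf(t - 2 * np.pi / 3) for t in theta])
e_c = np.array([k_e * w_m * trapezoidal_emf(t + 2 * np.pi / 3) for t in theta])
print(e_a[:5], e_b[:5], e_c[:5])
```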
Procedia PDF Downloads 600
12244 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network ingrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix at the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation. A model is proposed for the growth prognosis of organoid perfusion, based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changed consumption rate of the growing organoid cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of the microchannel network design and for simulations of the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
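The diffusion-consumption balance underlying such a model can be sketched (under the assumption of Michaelis-Menten-type uptake, which the abstract does not specify) as:

```latex
\frac{\partial c}{\partial t} \;=\; \nabla \cdot \bigl( D\, \nabla c \bigr)
\;-\; \rho_{\mathrm{cell}}\, \frac{V_{\max}\, c}{K_m + c}
```

Here c is the nutrient concentration, D the diffusion coefficient (generally different in the hydrogel and in the organoid), ρ_cell the local cell density, and V_max, K_m consumption parameters; the microchannel walls enter as boundary conditions prescribing the supplied nutrient concentration.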
Procedia PDF Downloads 309
12243 Developing a Health Promotion Program to Prevent and Solve Problem of the Frailty Elderly in the Community
Authors: Kunthida Kulprateepunya, Napat Boontiam, Bunthita Phuasa, Chatsuda Kankayant, Bantoeng Polsawat, Sumran Poontong
Abstract:
Frailty is the thin line between good health and illness. The syndrome is more common in the elderly, who transition from strong to weak (vulnerable). Frailty can be prevented, and healthy recovery promoted, before it progresses into disability. This research and development study aims to analyze the frailty situation of the elderly, develop a program, and evaluate the effect of a health promotion program to prevent and solve the problem of frailty among the elderly. The research consisted of 3 phases: 1) analysis of the frailty situation, 2) development of a model, and 3) evaluation of the effectiveness of the model. The samples were 328 and 122 elderly persons, selected using the multi-stage random sampling method. The research instrument was a frailty questionnaire based on five symptoms, the main characteristics being muscle weakness, slow walking, low physical activity, fatigue, and unintentional weight loss; the criterion for frailty was the presence of three or more of these symptoms. Data were analyzed using descriptive statistics and the dependent t-test. The findings comprised three parts. First, 23.05% of the elderly were frail and 56.70% were pre-frail. Second, a health promotion program to prevent and solve the problem of frailty in the elderly was developed, combining the nine-square exercise, elastic band exercise, and elastic coconut shell exercise. Third, the effectiveness of the model was evaluated by comparing the elderly participants' get-up-and-go test results: the average time was 14.42 before using the program and 8.57 after using the program, a difference that was statistically significant at the .05 level. In conclusion, the findings can be used to develop guidelines to promote the health of the frail elderly.
Keywords: elderly, fragile, nine-square exercise, elastic coconut shell
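The five-symptom scoring rule described above can be written as a tiny helper; the pre-frail band of one or two symptoms is an assumption based on the usual phenotype convention, since the abstract only states the cut-off of three or more symptoms for frailty.

```python
def frailty_status(weakness, slow_walking, low_activity, fatigue, weight_loss):
    """Classify frailty from the five phenotype symptoms (booleans).
    >= 3 symptoms -> 'frail' (as in the abstract); 1-2 -> 'pre-frail' (assumed); 0 -> 'robust'."""
    count = sum([weakness, slow_walking, low_activity, fatigue, weight_loss])
    if count >= 3:
        return "frail"
    if count >= 1:
        return "pre-frail"
    return "robust"

print(frailty_status(True, True, False, True, False))   # -> 'frail'
```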
Procedia PDF Downloads 106
12242 Is the Okun's Law Valid in Tunisia?
Authors: El Andari Chifaa, Bouaziz Rached
Abstract:
The central focus of this paper is to check whether Okun’s law is valid in Tunisia. For this purpose, we have used quarterly time series data over the period 1990Q1-2014Q1. First, we applied the error correction model instead of the difference version of Okun's law; the Engle-Granger and Johansen tests are employed to find the long-run association between unemployment and production, and an error correction mechanism (ECM) is used for the short-run dynamics. Second, we used the gap version of Okun’s law, where the estimation is carried out with three band-pass filters, which are mathematical tools used in macroeconomics and especially in business cycle theory. The findings of the study indicate that the inverse relationship between unemployment and output is verified in the short and long term and that Okun's law holds for the Tunisian economy, but with an Okun coefficient lower than required. Therefore, our empirical results have important implications for structural and cyclical policymakers in Tunisia seeking to promote economic growth in a context of lower unemployment growth.
Keywords: Okun’s law, validity, unit root, cointegration, error correction model, bandpass filters
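For reference, the two specifications mentioned above can be sketched as follows (the notation is assumed for illustration, not taken from the paper): the gap version of Okun's law and a generic error correction form.

```latex
% Gap version: cyclical unemployment vs. the output gap extracted by a band-pass filter
u_t - u_t^{*} \;=\; \beta\,\bigl(y_t - y_t^{*}\bigr) + \epsilon_t, \qquad \beta < 0

% Error correction form: short-run dynamics tied to the long-run (cointegrating) relation
\Delta u_t \;=\; \alpha_0 + \sum_i \alpha_i\, \Delta y_{t-i} + \sum_j \gamma_j\, \Delta u_{t-j}
\;+\; \lambda\,\bigl(u_{t-1} - \theta_0 - \theta_1 y_{t-1}\bigr) + \varepsilon_t
```

Here u and y denote unemployment and (log) output, starred values are trend components, and λ < 0 measures the speed of adjustment back towards the long-run relation.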
Procedia PDF Downloads 318
12241 Territorial Analysis of the Public Transport Supply: Case Study of Recife City
Authors: Cláudia Alcoforado, Anabela Ribeiro
Abstract:
This paper is part of an ongoing PhD thesis. It seeks to develop a model to identify the spatial failures of the public transport supply. In constructing the model, it also seeks to detect the social needs arising from transport disadvantage. The case study is carried out for the Brazilian city of Recife. Currently, Recife has a population density of 7,039.64 inhabitants per km². Unfortunately, only 46.9% of urban households on public roads have adequate urbanization. Added to this reality, the poorest population tends to occupy the peripheries, a pattern that has been consolidated in Brazil and Latin America, thus burdening family income, since greater distances must be covered for basic activities, with correspondingly higher transport costs. As a result, there have been great impacts caused by supplying public transport to locations with low demand or lacking urban infrastructure. The model under construction uses methods such as Currie’s gap assessment associated with London’s Public Transport Access Level, and the Public Transport Accessibility Index developed by Saghapour. This paper presents the current stage of the thesis, with the spatial/need gaps of the neighborhoods of Recife already detected. The benefits of geographic information systems are used in this work. It should be noted that gaps are determined from the transport supply indices, in this case considering the presence of walking catchment areas. Still in relation to the detection of gaps, the relevant demand index is also determined; this, in turn, is calculated through indicators that reflect social needs. With the use of the smallest Brazilian geographical unit, the census sector, the model, with the inclusion of population density in the study areas, should present more consolidated results. Based on the results achieved, an analysis of transport disadvantage as a factor of social exclusion in the study area will be carried out. The results obtained up to the present moment already indicate a strong concentration of public transport supply in areas of higher income classes, leading to the understanding that the most disadvantaged population migrates to those neighborhoods in search of employment.
Keywords: gap assessment, public transport supply, social exclusion, spatial gaps
Procedia PDF Downloads 184
12240 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the space of possible road scenarios is so huge that there cannot be a dataset covering every one of them. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our model. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle, and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have compiled a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further, more complex road scenarios. Furthermore, it can be extended to the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
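For completeness, the clipped surrogate objective that distinguishes PPO from the other two algorithms compared here is:

```latex
L^{\mathrm{CLIP}}(\theta) \;=\;
\hat{\mathbb{E}}_t\!\left[
\min\!\Bigl(
r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}\bigl(r_t(\theta),\,1-\epsilon,\,1+\epsilon\bigr)\,\hat{A}_t
\Bigr)\right],
\qquad
r_t(\theta) \;=\; \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here Â_t is an advantage estimate and ε the clipping parameter; the clipping keeps policy updates conservative, which is consistent with the more stable acceleration, steering, and braking reported above.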
Procedia PDF Downloads 151
12239 Investigating Trophic Relationships in Moroccan Marine Ecosystems: A Study of the Mediterranean and Atlantic Using Ecopath
Authors: Salma Aboussalam, Karima Khalil, Khalid Elkalay
Abstract:
An Ecopath model was employed to investigate the trophic structure, function, and current state of the Moroccan Mediterranean Sea ecosystem. The model incorporated 31 functional groups, including 21 fish species, 7 invertebrates, 2 primary producers, and a detritus group. The trophic interactions among these groups were analyzed, revealing an average trophic transfer efficiency of 23%. The results indicated that the ecosystem produced more energy than it consumed, with high respiration and consumption rates. Indicators of stability and development were low for the Finn cycle index (13.97), system omnivory index (0.18), and average Finn path length (3.09), indicating a disturbed ecosystem with a linear trophic structure. Keystone species were identified through the use of the keystone index and mixed trophic impact analysis, with demersal invertebrates, zooplankton, and cephalopods found to have a significant impact on other groups.
Keywords: Ecopath, food web, trophic flux, Moroccan Mediterranean Sea
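The mass-balance relation that Ecopath solves for each functional group i is, in its standard form (written here from the general Ecopath formulation rather than from the paper itself):

```latex
B_i \left(\frac{P}{B}\right)_i EE_i
\;-\; \sum_{j} B_j \left(\frac{Q}{B}\right)_j DC_{ji}
\;-\; Y_i \;-\; E_i \;-\; BA_i \;=\; 0
```

Here B is biomass, P/B the production rate, EE the ecotrophic efficiency, Q/B the consumption rate, DC_{ji} the fraction of prey i in the diet of predator j, Y the fishery catch, E net migration, and BA biomass accumulation; solving this system for the 31 functional groups yields the trophic flows summarized above.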
Procedia PDF Downloads 105
12238 Implicit Eulerian Fluid-Structure Interaction Method for the Modeling of Highly Deformable Elastic Membranes
Authors: Aymen Laadhari, Gábor Székely
Abstract:
This paper is concerned with the development of a fully implicit and purely Eulerian fluid-structure interaction method tailored for the modeling of the large deformations of elastic membranes in a surrounding Newtonian fluid. We consider a simplified model for the mechanical properties of the membrane, in which the surface strain energy depends on the membrane stretching. The fully Eulerian description is based on the advection of a modified surface tension tensor, and the deformations of the membrane are tracked using a level set strategy. The resulting nonlinear problem is solved by a Newton-Raphson method, featuring a quadratic convergence behavior. A monolithic solver is implemented, and we report several numerical experiments aimed at model validation and illustrating the accuracy of the presented method. We show that stability is maintained for significantly larger time steps.
Keywords: finite element method, implicit, level set, membrane, Newton method
Procedia PDF Downloads 307
12237 Critical Success Factors (CSFS) in ERP Implementation at the PP Company: Management and Technology Perspectives
Authors: Eko Ganis Sukoharsono, Meivida Medyastanti
Abstract:
This study explores the critical success factors (CSFs) for successful ERP implementation at the PP Company, a leading state-owned construction company in Indonesia. The study uses a qualitative, postmodernist approach through an imaginary dialogue between a CEO and a technologist to analyze ERP implementation from both managerial and technological perspectives. Key CSFs identified include strong support from top management, clear project scope and objectives, effective change management, employee engagement, data accuracy, and robust IT infrastructure. The study’s findings are synthesized into a CSF model that highlights the importance of aligning ERP systems with business objectives and emphasizes the need for continuous post-implementation support. This model provides a strategic framework that can guide other companies, particularly state-owned enterprises, in navigating ERP implementation, ensuring an optimal return on investment, and enhancing organizational efficiency.
Keywords: ERP, critical success factors, PT. PP, postmodernist paradigm, management, technology
Procedia PDF Downloads 8
12236 The Future of the Architect's Profession in France with the Emergence of Building Information Modelling
Authors: L. Mercier, D. Beladjine, K. Beddiar
Abstract:
The digital transition of the building sector in France brings many changes: some professionals have been able to adapt very quickly, while others are struggling to find their place and to see the value that BIM can bring to their profession. Today, BIM has already been adopted or initiated by construction professionals. However, this change, which can be drastic for some, prevents them from integrating it definitively. This is the case with architects. The profession is divided over the practice of BIM in its work. The risk of not adopting this new working method now, and of not wanting to switch to its new digital tools, leads us to question the future of the profession in view of the gap that is likely to be created within project management. In order to deal with the subject efficiently, our work was based on a documentary review of BIM and then of the architect's profession, which allowed us to establish links between these two subjects. Observing the economic model towards which agencies are tending, and the trend in sought-after profiles, made it possible to identify the opportunities and the obstacles likely to affect the future of the architect's profession. Bringing this research together leads to the conclusion that the model implemented by companies does not allow BIM to be integrated within their structure. A solution hypothesis was then put forward, focusing on the development of agencies through a diversity of profiles and skills to be integrated internally, with the aim of diversifying their competencies and their business practices. In order to address this hypothesis of a multidisciplinary agency model, we conducted a survey of architectural firms. The hypothesis is built on the model of Anglo-Saxon countries, which do not operate in the same way as the French model. The results obtained showed a risk of the gradual disappearance from the market of small agencies in favour of those that adopt, or are able to adopt, this BIM working method. This is why the architectural profession must, first of all, look at what is happening within its training before seeking at all costs to diversify the profiles to be integrated into its structures. This directs the study towards the training of architects. French architecture schools are generally behind schedule if we allow a comparison with engineering schools. The situation is currently improving slightly with the emergence of master's programmes and BIM options during the university course. If the training of architects develops towards learning BIM, and the agencies have the desire to integrate different but complementary profiles, then they will develop their skills internally and therefore open their profession to new functions. The place of BIM management on projects will allow the architect to remain in control of the project because of their overall vision of the project. In addition, the integration of BIM, and more generally of the life cycle analysis of the structure, will make it possible to guarantee eco-design or eco-construction by addressing the constraints of sustainable development omnipresent on the planet.
Keywords: building information modelling, BIM, BIM management, BIM manager, BIM architect
Procedia PDF Downloads 114
12235 Volatility Transmission between Oil Price and Stock Return of Emerging and Developed Countries
Authors: Algia Hammami, Abdelfatteh Bouri
Abstract:
In this work, our objective is to study the transmission of volatility between oil and stock markets in developed (USA, Germany, Italy, France, and Japan) and emerging countries (Tunisia, Thailand, Brazil, Argentina, and Jordan) over the period 1998-2015. Our methodology consists of analyzing monthly data with the GARCH-BEKK model to capture the effect, in terms of volatility, of oil price variations on the different stock markets. The empirical results for the emerging countries indicate that the relationships are unidirectional from the stock market to the oil market. For the developed countries, we find that the transmission of volatility is unidirectional from the oil market to the stock market. For the USA and Italy, we find no transmission between the two markets. The transmission is bi-directional only in Thailand. Following our estimates, we also noticed that the emerging countries exert an influence of almost the same extent as the developed countries, while in the transmission of volatility there is a big difference. The GARCH-BEKK model is more effective than the other versions in minimizing the risk of an oil-stock portfolio.
Keywords: GARCH, oil prices, stock market, volatility transmission
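For reference, the bivariate BEKK(1,1) specification underlying such estimates writes the conditional covariance matrix of the oil and stock return innovations as:

```latex
H_t \;=\; C^{\top} C \;+\; A^{\top}\, \varepsilon_{t-1}\varepsilon_{t-1}^{\top}\, A
\;+\; B^{\top} H_{t-1} B
```

Here ε_{t−1} is the 2×1 vector of lagged innovations, C is an upper-triangular matrix, and the off-diagonal elements of A and B capture cross-market shock and volatility spillovers, i.e., the volatility transmission effects tested in this paper.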
Procedia PDF Downloads 439
12234 Evaluation of Neighbourhood Characteristics and Active Transport Mode Choice
Authors: Tayebeh Saghapour, Sara Moridpour, Russell George Thompson
Abstract:
One of the common aims of transport policy makers is to switch people’s travel to active transport. For this purpose, a variety of transport goals and investments should be programmed to increase the propensity towards active transport mode choice. This paper aims to investigate whether built environment features in neighbourhoods could enhance the odds of active transportation. The present study introduces an index measuring public transport accessibility (PTAI), and a walkability index along with socioeconomic variables to investigate mode choice behaviour. Using travel behaviour data, an ordered logit regression model is applied to examine the impacts of explanatory variables on walking trips. The findings indicated that high rates of active travel are consistently associated with higher levels of walking and public transport accessibility.
Keywords: active transport, public transport accessibility, walkability, ordered logit model
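The ordered logit specification used for the walking-trip outcome can be sketched as follows (the thresholds and the covariate vector are written generically; the exact set of covariates is as described above):

```latex
\Pr\bigl(y_i \le k \mid \mathbf{x}_i\bigr) \;=\;
\frac{1}{1 + \exp\!\bigl(-(\tau_k - \mathbf{x}_i^{\top}\boldsymbol{\beta})\bigr)},
\qquad k = 1,\dots,K-1
```

Here y_i is the ordered walking-trip category for respondent i, τ_k are the threshold (cut-point) parameters, and x_i collects the PTAI, the walkability index, and the socioeconomic variables.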
Procedia PDF Downloads 354
12233 Comparative Analysis of DTC Based Switched Reluctance Motor Drive Using Torque Equation and FEA Models
Authors: P. Srinivas, P. V. N. Prasad
Abstract:
Since torque ripple is the main cause of noise and vibration, the performance of a switched reluctance motor (SRM) can be improved by minimizing its torque ripple using a novel control technique called direct torque control (DTC). In the DTC technique, torque is controlled directly through control of the magnitude of the flux and of the rotation speed of the stator flux vector. The flux and torque are maintained within set hysteresis bands. The DTC of the SRM is analysed by two methods. In one method, the actual torque is computed by conducting finite element analysis (FEA) on the design specifications of the motor. In the other method, the torque is computed by the simplified torque equation. The variations of peak current, average current, torque ripple, and speed settling time obtained with the simplified torque equation model are compared with those of the FEA-based model.
Keywords: direct torque control, simplified torque equation, finite element analysis, torque ripple
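The "simplified torque equation" referred to above is, in its usual linear (unsaturated) magnetics form:

```latex
T(\theta, i) \;=\; \frac{1}{2}\, i^{2}\, \frac{dL(\theta)}{d\theta}
```

Here i is the phase current and L(θ) the phase inductance as a function of rotor position; the FEA-based model instead computes torque from the field solution, which accounts for magnetic saturation, and this difference is what the comparison in the paper quantifies.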
Procedia PDF Downloads 480
12232 A Parametric Study on the Backwater Level Due to a Bridge Constriction
Authors: S. Atabay, T. A. Ali, Md. M. Mortula
Abstract:
This paper presents the results and findings of a parametric study on the water surface elevation upstream of a bridge constriction for subcritical flow. In this study, the influences on the backwater level of the Manning roughness coefficient of the main channel (nmc) and of the floodplain (nfp), the bridge opening (b), the flow rate (Q), and the contraction (kcon) and expansion (kexp) coefficients were investigated. Deck bridge models with different span widths and without any pier were investigated within a two-stage channel with various roughness conditions. The widely used one-dimensional HEC-RAS model was used in this parametric study. The study showed that the effects of the main channel roughness (nmc) and the flow rate (Q) on the backwater level are much higher than those of the floodplain roughness (nfp). The bridge opening (b), together with the contraction (kcon) and expansion (kexp) coefficients, has very little effect on the backwater level within this range of parameters.
Keywords: bridge backwater, parametric study, waterways, HEC-RAS model
Procedia PDF Downloads 308
12231 Data and Biological Sharing Platforms in Community Health Programs: Partnership with Rural Clinical School, University of New South Wales and Public Health Foundation of India
Authors: Vivian Isaac, A. T. Joteeshwaran, Craig McLachlan
Abstract:
The University of New South Wales (UNSW) Rural Clinical School has a strategic collaborative focus on chronic disease and public health. Our objectives are to understand rural environmental and biological interactions in vulnerable community populations. The UNSW Rural Clinical School translational model is a spoke-and-hub network. This spoke-and-hub model connects rural data and biological specimens with city-based collaborative public health research networks. Similar spoke-and-hub models are prevalent across research centers in India. An Australia-India Council grant was awarded so that we could establish sustainable public health and community research collaborations. As part of the collaborative network, we are developing strategies around data and biological sharing platforms between the Indian Institute of Public Health, Public Health Foundation of India (PHFI), Hyderabad, and the Rural Clinical School, UNSW. The key objective is to understand how research collaborations are conducted in India and also how data can be shared and tracked with external collaborators such as ourselves. A framework to improve data sharing for research collaborations, including DNA, was proposed as a project outcome. The complexities of sharing biological data have been investigated via a visit to India. A flagship sustainable project between the Rural Clinical School UNSW and PHFI would illustrate a model of data sharing platforms.
Keywords: data sharing, collaboration, public health research, chronic disease
Procedia PDF Downloads 452
12230 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model
Authors: Jai Heui Kim, Sotheara Veng
Abstract:
This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using the asymptotic analysis method. When the volatility component is fast mean-reverting, it is possible to derive asymptotic approximations for the value function and the optimal strategy for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy recovers the value function not only up to the leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result is an extension of Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.
Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility
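A hybrid stochastic-volatility/CEV asset price model of the type studied here can be sketched as follows (the notation and the fast mean-reverting Ornstein-Uhlenbeck scaling are assumptions for illustration; the paper's exact specification may differ):

```latex
dS_t \;=\; \mu S_t\, dt \;+\; f(Y_t)\, S_t^{\,\beta}\, dW_t^{S}, \qquad
dY_t \;=\; \frac{1}{\varepsilon}\bigl(m - Y_t\bigr)\, dt \;+\; \frac{\nu\sqrt{2}}{\sqrt{\varepsilon}}\, dW_t^{Y}
```

Here β is the CEV elasticity parameter, f(Y_t) the volatility level driven by the fast factor Y_t (ε small), and the asymptotic approximations mentioned above are expansions in powers of √ε around the leading-order solution.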
Procedia PDF Downloads 300
12229 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can be easily misdiagnosed. As a mobility disease, it may greatly benefit from the antenna’s near-field approach with a variety of activities that can use WBAN and IoT technologies to increase diagnosis accuracy and patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna mounted (using cloth) on the dorsal side of the wrist to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved optimal performance in terms of accuracy (88.64%), F1-score (88.54), and recall (90.46%), with a latency of 33 seconds per epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
Procedia PDF Downloads 12
12228 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but at the same time, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. The AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and neural networks were utilized for intrusion detection. Statistical analyses were performed using the SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and the continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
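As an illustration of the machine-learning component described above, the following is a minimal sketch of a Random Forest intrusion detector trained on synthetic network-flow features; the feature set, data, and resulting scores are placeholders, not the study's data or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

# Synthetic stand-in for labelled network-flow records (benign = 0, attack = 1).
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))            # e.g. duration, bytes, packets, rate, flags, entropy
y = (X[:, 1] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("sensitivity (recall on attacks):", recall_score(y_test, pred))
```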
Procedia PDF Downloads 34
12227 The Power of Inferences and Assumptions: Using a Humanities Education Approach to Help Students Learn to Think Critically
Authors: Randall E. Osborne
Abstract:
A four-step ‘humanities’ thought model has been used in an interdisciplinary course for almost two decades and has been shown to aid students in becoming more inclusive in their world view. A lack of tolerance for ambiguity can interfere with this progression, so we developed an assignment that seems to have assisted students in developing more tolerance for ambiguity and, therefore, opened them up to make more progress on the critical thought model. The four-step critical thought model (built from a humanities education approach) is used in an interdisciplinary course on prejudice, discrimination, and hate in an effort to minimize egocentrism and promote sociocentrism in college students. The approach to the course is built on the assumption that tolerance for ambiguity (characterized by a dislike of uncertain or ambiguous situations, or of situations in which expected behaviors are uncertain) will likely serve as a barrier (if tolerance is low) or a facilitator (if tolerance is high) of active ‘engagement’ with assignments. Given that active engagement with course assignments is necessary to promote an increase in critical thought and in the degree of multicultural attitude change, a low tolerance for ambiguity inhibits critical thinking and, ultimately, multicultural attitude change. As expected, those students showing the least decrease (or even an increase) in intolerance across the semester earned lower grades in the course than those students who showed a significant decrease in intolerance, t(1,19) = 4.659, p < .001. Students who demonstrated the most change in their tolerance for ambiguity (showing an increasing ability to tolerate ambiguity) earned the highest grades in the course. This is especially significant because faculty did not know student scores on this measure until after all assignments had been graded and course grades assigned. An assignment designed to assist students in making their assumption and inference processes visible, so they could be explored, was implemented with the goal that this exploration would then promote more tolerance for ambiguity, which, as already outlined, promotes critical thought. The assignment offers students two options and then requires them to explore what they have learned about inferences and/or assumptions. This presentation outlines the assignment and demonstrates the humanities model, what students learn from particular assignments, and how it fosters a change in tolerance for ambiguity, which serves as the foundational component of critical thinking.
Keywords: critical thinking, humanities education, sociocentrism, tolerance for ambiguity
Procedia PDF Downloads 274
12226 Digital Elevation Model Analysis of Potential Prone Flood Disaster Watershed Citarum Headwaters Bandung
Authors: Faizin Mulia Rizkika, Iqbal Jabbari Mufti, Muhammad R. Y. Nugraha, Fadil Maulidir Sube
Abstract:
Flooding is the ponding of water on the flat area around a river that results when overflowing river water can no longer be accommodated by the channel, and it may cause damage to the infrastructure of a region. This study aimed to analyze Digital Elevation Model (DEM) data for information that plays a role in the mapping of flood-prone zones and to map the distribution of flood-prone zones in the Citarum headwaters, using secondary data and software (ArcGIS, MapInfo). From this assessment, a flood distribution map was produced; 13 districts in Bandung were found to contain flood-prone areas, and the most vulnerable districts are Baleendah, Dayeuhkolot, Bojongsoang, and Banjaran. These areas have low slopes, share their boundaries with the rivers, and have excessive land use, so the water catchment area is reduced.
Keywords: mitigation, flood, Citarum, DEM
Procedia PDF Downloads 392
12225 An Efficient Activated Carbon for Copper (II) Adsorption Synthesized from Indian Gooseberry Seed Shells
Authors: Somen Mondal, Subrata Kumar Majumder
Abstract:
The removal of metal pollutants by efficient activated carbon is a challenging research topic in the present-day scenario. In the present study, the characteristic features of an efficient activated carbon (AC) synthesized from Indian gooseberry seed shells for copper (II) adsorption are reported. A three-step chemical activation method consisting of impregnation, carbonization, and subsequent activation is used to produce the activated carbon. The kinetics and isotherms of copper adsorption onto the activated carbon were analyzed. In the present investigation, the Indian gooseberry seed shell-based activated carbon showed a BET surface area of 1359 m²/g. The maximum adsorption capacity of the activated carbon at a pH value of 9.52 was found to be 44.84 mg/g at 30°C. The adsorption process followed the pseudo-second-order kinetic model along with the Langmuir adsorption isotherm. This AC could be used as a favorable and cost-effective copper (II) adsorbent in wastewater treatment to remove metal contaminants.
Keywords: activated carbon, adsorption isotherm, kinetic model, characterization
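For reference, the two models that the adsorption data were found to follow are usually written as:

```latex
% Langmuir isotherm
q_e \;=\; \frac{q_m\, K_L\, C_e}{1 + K_L\, C_e}

% Pseudo-second-order kinetics (linearized form)
\frac{t}{q_t} \;=\; \frac{1}{k_2\, q_e^{2}} \;+\; \frac{t}{q_e}
```

Here q_e and q_t are the amounts of Cu(II) adsorbed at equilibrium and at time t (mg/g), C_e the equilibrium concentration, q_m the monolayer capacity (of the order of the 44.84 mg/g reported above), K_L the Langmuir constant, and k_2 the pseudo-second-order rate constant.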
Procedia PDF Downloads 164
12224 Co-Culture of Neonate Mouse Spermatogonial Stem Cells with Sertoli Cells: Inductive Role of Melatonin following Transplantation: Adult Azoospermia Mouse Model
Authors: Mehdi Abbasi, Shadan Navid, Mohammad Pourahmadi, M. Majidi Zolbin
Abstract:
We have recently reported that melatonin, as an antioxidant, enhances the efficacy of colonization of spermatogonial stem cells (SSCs). Melatonin plays a vital role in the development of SSCs in vitro. This study aimed to evaluate the simultaneous effect of Sertoli cells and melatonin on SSC proliferation following transplantation into the testes of an adult busulfan-treated azoospermic mouse model. SSCs and Sertoli cells were isolated from the testes of three- to six-day-old male mice. To determine purity, flow cytometry using a PLZF antibody was performed. Isolated testicular cells were cultured in αMEM medium in the absence (control group) or presence (experimental group) of Sertoli cells and melatonin extract for 2 weeks. We then transplanted the SSCs by injection into the azoospermic mouse model. Higher viability, proliferation, and Id4 and Plzf expression were observed in the simultaneous presence of Sertoli cells and melatonin in vitro. Moreover, immunocytochemistry results showed higher Oct4 expression in this group. Eight weeks after transplantation, the injected cells were localized at the base of the seminiferous tubules in the recipient testes. The number of spermatogonia and the weight of the testes were higher in the experimental group relative to the control group. The results of our study suggest that this new protocol can increase the efficiency of transplantation of these cells and can be useful in the treatment of male infertility.
Keywords: colonization, melatonin, spermatogonial stem cell, transplantation
Procedia PDF Downloads 170
12223 Developing a Model to Objectively Assess the Culture of Individuals and Teams in Order to Effectively and Efficiently Achieve Sustainability in the Manpower
Authors: Ahmed Mohamed Elnady Mohamed Elsafty
Abstract:
This paper explains an applied, objective model developed to measure culture qualitatively and quantitatively, whether in individuals or in teams, in order to be able to use culture correctly or modify it efficiently. This model provides precise measurements and consistent interpretations by being comprehensive, updateable, and protected from being misled by imitations. Methodologically, the model divides culture into seven dimensions (43 cultural factors in total). The first dimension is outcome-orientation, which consists of five factors and should be highest in leaders. The second dimension is details-orientation, which consists of eight factors and should be highest in intelligence members. The third dimension is team-orientation, which consists of five factors and should be highest in instructors or coaches. The fourth dimension is change-orientation, which consists of five factors and should be highest in soldiers. The fifth dimension is people-orientation, which consists of eight factors and should be highest in media members. The sixth dimension is masculinity, which consists of seven factors and should be highest in hard workers. The last dimension is stability, which consists of seven factors and should be highest in soft workers. In this paper, the details of all cultural factors are explained. Practically, collecting information about each cultural factor in the targeted person or team is essential in order to calculate the degrees of all cultural factors, using the suggested equation of multiplying 'the score of factor presence' by 'the score of factor strength'. In this paper, the details of how to build each score are explained. Based on the highest degrees, which identify the prominent cultural dimension, placing the tested individual or team in the supposedly right position at the right time provides a chance to use minimal effort to align everyone with the organization’s objectives. In other words, making everyone self-motivated by setting him or her at the right source of motivation is the most effective and efficient method to achieve high levels of competency, commitment, and sustainability. Modifying a team culture can be achieved by excluding or including members with relatively high or low degrees in specific cultural factors. In conclusion, culture can be considered the software of human beings, and it is one of the major factors constraining managerial discretion. It represents the behaviors, attitudes, and motivations of human resources, which are vital for enhancing quality and safety, expanding market share, and defending against attacks from external environments. Thus, it is tremendously essential and useful to use such a comprehensive model to measure, use, and modify culture.
Keywords: culture dimensions, culture factors, culture measurement, cultural analysis, cultural modification, self-motivation, alignment to objectives, competency, sustainability
Procedia PDF Downloads 166
12222 Multidimensional Modeling of Solidification Process of Multi-Crystalline Silicon under Magnetic Field for Solar Cell Technology
Authors: Mouhamadou Diop, Mohamed I. Hassan
Abstract:
Molten metal flow in metallurgical plants is highly turbulent and presents a complex coupling with heat transfer, phase change, chemical reaction, momentum transport, etc. Molten silicon flow has a significant effect on the directional solidification of multicrystalline silicon by affecting the temperature field and the emerging crystallization interface, as well as the transport of species and impurities during the casting process. Owing to the complexity and the limits of reliable measuring techniques, computational models of fluid flow are useful tools to study and quantify these problems. The overall objective of this study is to investigate the potential of a traveling magnetic field for efficient operating control of the molten metal flow. A multidimensional numerical model is developed for the calculation of the Lorentz force, the molten metal flow, and the related phenomena. The numerical model is implemented for a laboratory-scale silicon crystallization furnace and is used to study the effects of the magnetic force applied to the molten flow, and their interdependencies. In this paper, coupled and decoupled, steady and unsteady models of the molten flow and the crystallization interface will be compared. This study will allow us to retrieve the optimal traveling magnetic field parameter range for crystallization furnaces and the optimal numerical simulation strategy for industrial applications.
Keywords: multidimensional, numerical simulation, solidification, multicrystalline, traveling magnetic field
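The body force driving the melt stirring in this configuration is the Lorentz force density produced by the travelling magnetic field; in the usual magnetohydrodynamic form (written here generically, not as the paper's specific implementation):

```latex
\mathbf{f} \;=\; \mathbf{J} \times \mathbf{B}, \qquad
\mathbf{J} \;=\; \sigma\left(\mathbf{E} + \mathbf{u} \times \mathbf{B}\right)
```

Here J is the induced current density in the molten silicon, B the travelling magnetic field, σ the electrical conductivity, and u the melt velocity; in practice, the force averaged over one period of the travelling field is applied as a momentum source in the flow model.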
Procedia PDF Downloads 246