Search results for: winkler model (beam on elastic foundation)
12343 Effect of Runup over a Vertical Pile Supported Caisson Breakwater and Quarter Circle Pile Supported Caisson Breakwater
Authors: T. J. Jemi Jeya, V. Sriram
Abstract:
The Pile Supported Caisson breakwater is an eco-friendly breakwater very useful in coastal zone protection. The model is developed by combining the advantages of the caisson breakwater and the pile supported breakwater: the top portion is a vertical or quarter-circle caisson and the bottom portion is a pile supported breakwater, giving the Vertical Pile Supported Caisson Breakwater (VPSCB) and the Quarter-circle Pile Supported Caisson Breakwater (QPSCB). The study mainly focuses on the comparison of run-up over VPSCB and QPSCB under oblique waves. The experiments are carried out in a shallow wave basin at different water depths (d = 0.5 m & 0.55 m) and under different oblique regular waves (0°, 15°, 30°). The run-up over the surface is measured by placing two run-up probes 0.3 m on either side of the centre of the model. The results show that the non-dimensional shoreward run-up decreases slightly as the angle of wave attack increases.
Keywords: Caisson breakwater, pile supported breakwater, quarter circle breakwater, vertical breakwater
Procedia PDF Downloads 156
12342 Using Groundwater Modeling System to Create a 3-D Groundwater Flow and Solute Transport Model for a Semiarid Region: A Case Study of the Nadhour Saouaf Sisseb El Alem Aquifer, Central Tunisia
Authors: Emna Bahri Hammami, Zammouri Mounira, Tarhouni Jamila
Abstract:
The Nadhour Saouaf Sisseb El Alem (NSSA) system comprises some of the most intensively exploited aquifers in central Tunisia. Since the 1970s, the growth in economic productivity linked to intensive agriculture in this semiarid region has been sustained by increasing pumping rates of the system’s groundwater. Exploitation of these aquifers has increased rapidly, ultimately causing their depletion. With the aim of better understanding the behavior of the aquifer system and predicting its evolution, the paper presents a finite difference model of the groundwater flow and solute transport. The model is based on the Groundwater Modeling System (GMS) and was calibrated using data from 1970 to 2010. Groundwater levels observed in 1970 were used for the steady-state calibration. Groundwater levels observed from 1971 to 2010 served to calibrate the transient state. The impact of pumping discharge on the evolution of groundwater levels was studied through three hypothetical pumping scenarios. The first two scenarios replicated the approximate drawdown in the aquifer heads (about 17 m in scenario 1 and 23 m in scenario 2 in the center of NSSA) following an increase in pumping rates by 30% and 50% from their current values, respectively. In the third scenario, pumping was stopped, which could increase groundwater reserves by about 7 Mm³/year. NSSA groundwater reserves could be improved considerably if pumping rules were enforced.
Keywords: pumping, depletion, groundwater modeling system (GMS), Nadhour Saouaf
Procedia PDF Downloads 224
12341 Meta-analysis of Technology Acceptance for Mobile and Digital Libraries in Academic Settings
Authors: Nosheen Fatima Warraich
Abstract:
One of the most often used models in information system (IS) research is the technology acceptance model (TAM). This meta-analysis aims to measure the relationship of the TAM variables Perceived Ease of Use (PEOU) and Perceived Usefulness (PU) with users’ attitudes and behavioral intention (BI) in the mobile and digital libraries context. It also examines the relationship of external variables (information quality and system quality) with the TAM variables (PEOU and PU) in digital library settings. The meta-analysis was performed following the PRISMA-P guidelines. Four databases (Google Scholar, Web of Science, Scopus, and LISTA) were utilized for searching, and the search was conducted according to defined criteria. The findings of this study revealed a large effect size of PU and PEOU with BI. There was also a large effect size of PU and PEOU with attitude. A medium effect size was found between SysQ -> PU, InfoQ -> PU, and SysQ -> PEOU. However, there was a small effect size between InfoQ and PEOU. The study fills a literature gap and also confirms that TAM is a valid model for the acceptance and use of technology in the mobile and digital libraries context. Thus, its findings would be helpful for developers and designers in designing and developing mobile library apps. It will also be beneficial for library authorities and system librarians in designing and developing digital libraries in academic settings.
Keywords: technology acceptance model (TAM), perceived ease of use, perceived usefulness, information quality, system quality, meta-analysis, systematic review, digital libraries, mobile library apps
Procedia PDF Downloads 80
12340 Effect of Three Desensitizers on Dentinal Tubule Occlusion and Bond Strength of Dentin Adhesives
Authors: Zou Xuan, Liu Hongchen
Abstract:
The ideal dentin desensitizing agent should not only have good biological safety, a simple clinical operation mode and a superior treatment effect, but should also have a durable effect that resists oral temperature changes and mechanical abrasion, so as to achieve persistent desensitization. Also, when using a desensitizing agent to prevent post-operative hypersensitivity, we should not only prevent it from affecting crown retention, but must also understand its effects on the bond strength of dentin adhesives. There are various desensitizers and dentin adhesives in clinical treatment, with different chemical or physical properties. Whether the use of a desensitizing agent would affect the bond strength of dentin adhesives still needs further research. In this in vitro study, we built a hypersensitive dentin model and a post-operative dentin model to evaluate the sealing effects and durability of three different dentin desensitizers on exposed tubules, and to evaluate the sealing effects and the bond strength of dentin adhesives after using the three desensitizers on post-operative dentin. The results of this study could provide important references for the clinical use of dentin desensitizing agents. 1. For the three desensitizers, the hypersensitive dentin model was built to evaluate their sealing effects on exposed tubules by SEM observation and dentin permeability analysis. All of them could significantly reduce the dentin permeability. 2. Test specimens of the three groups treated by desensitizers were subjected to aging treatment with 5000 thermal cycles and toothbrush abrasion, and dentin permeability was then measured to evaluate the sealing durability of the three desensitizers on exposed tubules. The sealing durability of the three groups differed. 3. The post-operative dentin model was built to evaluate the sealing effects of the three desensitizers on post-operative dentin by SEM and methylene blue. All three desensitizers could reduce the dentin permeability significantly. 4. The influences of the three desensitizers on the bonding efficiency of total-etch and self-etch adhesives were evaluated with a micro-tensile bond strength study and bond interface morphology observation. The dentin bond strength for the Green Or group was significantly lower than for the other two groups (P<0.05).
Keywords: dentin, desensitizer, dentin permeability, thermal cycling, micro-tensile bond strength
Procedia PDF Downloads 397
12339 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is a well-known cause of an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that the patient can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is amongst the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the following biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential to create a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic with more clinical samples to further improve the model’s clinical applicability.
Keywords: type-A aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
Procedia PDF Downloads 94
12338 Estimate Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis
Authors: Nayak Amar, Turner Naomi, Gobina Edward
Abstract:
Scottish Higher Education Institutes (HEIs) must report their scope 1 & 2 emissions, whereas reporting scope 3 is optional. Scope 3 covers indirect emissions, which embody a significant component of the total carbon footprint, and it is therefore important to record, measure and report them accurately. Robert Gordon University (RGU) reported only business travel, grid transmission and distribution, water supply and transport, and recycling scope 3 emissions. This study estimates RGU's total scope 3 emissions by comparing it with an HEI of similar scale. The scope 3 emission reporting of sixteen Scottish HEIs was studied. Glasgow Caledonian University was identified as the nearest neighbour by comparing its student full-time equivalent, staff full-time equivalent, research-teaching split, budget, and foundation year. Apart from the peer, data were also collected from the Higher Education Statistics Agency database. RGU reported emissions from business travel, grid transmission and distribution, water supply, and transport and recycling. This study estimated RGU's scope 3 emissions from procurement, student and staff commuting, and international student trips. The result showed that RGU covered only 11% of its scope 3 emissions. The major contributors to scope 3 emissions were procurement (48%), student commute (21%), international student trips (16%), and staff commute (4%). The estimated scope 3 emissions were more than 14 times the reported emissions. This study has shown the relative importance of each scope 3 emission source, which gives HEIs a guideline on where to focus their attention to capture the maximum scope 3 emissions. Moreover, it has demonstrated that it is possible to estimate scope 3 emissions with limited data.
Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting
Procedia PDF Downloads 103
12337 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track
Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink
Abstract:
The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, with dynamic calculations overestimating the actual responses and therefore leading to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and lack of knowledge. Using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties – independent of the bearing structure. Several mechanical models for the ballasted track consisting of one or more continuous spring-damper elements were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available for bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it is also shown that the new approach, which makes it possible to calculate the damping factor, provides results that are close to reality and thus raises the potential for minimising the discrepancy between measurement and calculation.
Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges
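For orientation, the quantity referred to above as Lehr's damping factor is the standard viscous damping ratio of an equivalent single-degree-of-freedom oscillator; the bridge-specific determination equations derived from the coupled bridge-track model are not reproduced in the abstract, so only the generic definition is sketched here, with c, k, m and δ denoting the viscous damping coefficient, stiffness, modal mass and logarithmic decrement:
\[ \zeta = \frac{c}{c_{\mathrm{crit}}} = \frac{c}{2\sqrt{k\,m}}, \qquad \zeta \approx \frac{\delta}{2\pi} \quad \text{for light damping.} \]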
Procedia PDF Downloads 168
12336 Pricing European Continuous-Installment Options under Regime-Switching Models
Authors: Saghar Heidari
Abstract:
In this paper, we study the valuation problem of European continuous-installment options under Markov-modulated models with a partial differential equation approach. Due to the opportunity of continuing or stopping the payment of installments, the valuation problem under regime-switching models can be formulated as coupled partial differential equations (CPDE) with free boundary features. To value the installment options, we express the truncated CPDE as a linear complementarity problem (LCP), and a finite element method is then proposed to solve the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and illustrate some numerical results to examine the rate of convergence and accuracy of the proposed method for the pricing problem under the regime-switching model.
Keywords: continuous-installment option, European option, regime-switching model, finite element method
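As a rough orientation only (the abstract does not state the exact formulation), a coupled complementarity system of the kind described above can be sketched as follows, assuming a Black–Scholes operator \(\mathcal{L}_i\) per regime \(i\), a continuous installment rate \(q\), transition rates \(\lambda_{ij}\) of the modulating Markov chain, and the holder's right to stop paying, which keeps the option value non-negative:
\[ \frac{\partial V_i}{\partial t} + \mathcal{L}_i V_i + \sum_{j\neq i}\lambda_{ij}\,(V_j - V_i) - q \;\le\; 0, \qquad V_i \;\ge\; 0, \qquad \Big(\frac{\partial V_i}{\partial t} + \mathcal{L}_i V_i + \sum_{j\neq i}\lambda_{ij}\,(V_j - V_i) - q\Big)\, V_i \;=\; 0 . \]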
Procedia PDF Downloads 142
12335 The Performance Improvement of Solar Aided Power Generation System by Introducing the Second Solar Field
Authors: Junjie Wu, Hongjuan Hou, Eric Hu, Yongping Yang
Abstract:
Solar aided power generation (SAPG) technology has been proven an efficient way to make use of solar energy for power generation purposes. In an SAPG plant, a solar field consisting of parabolic solar collectors is normally used to supply solar heat in order to displace the high pressure/temperature extraction steam. To understand the performance of such an SAPG plant, a new simulation model was recently developed by the authors, in which the boiler was treated as a series of heat exchangers, unlike in other previous models. Through simulations using the new model, it was found that the outlet properties of the reheated steam, e.g. temperature, would decrease due to the introduction of the solar heat. These changes make the (lower stage) turbines work under off-design conditions; as a result, the whole plant's performance may not be optimal. In this paper, a second solar field is proposed to increase the inlet temperature of the steam to be reheated, in order to bring the outlet temperature of the reheated steam back to the design condition. A 600 MW SAPG plant was simulated as a case study using the new model to understand the impact of the second solar field on the plant performance. It was found that the second solar field would improve the plant's performance in terms of cycle efficiency and solar-to-electricity efficiency by 1.91% and 6.01%, respectively. The solar-generated electricity produced per aperture area under the design condition was 187.96 W/m², which was 26.14% higher than in the previous design.
Keywords: solar-aided power generation system, off-design performance, coal-saving performance, boiler modelling, integration schemes
Procedia PDF Downloads 293
12334 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter
Authors: Wesley Preßler, Lucie Schmidt
Abstract:
Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors Digital Literacy, Interpretive Patterns and Technology Acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Literacy Framework (DigComp) as a basis. Interpretations of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. However, these are not mere assessments, prejudices, or stereotyped perceptions but collective patterns, rules, attributions of meaning and the cultural repertoire that leads to these opinions and attitudes. Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform a/their neighborhood. This involves examining people's attitudes, beliefs, and values about technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance is another crucial factor for successful digital transformation to examine the willingness of individuals to adopt and use new technologies. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns to measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. 
It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.
Keywords: digital transformation, interdisciplinary, maturity model, neighborhood
Procedia PDF Downloads 84
12333 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness
Authors: Melissa Angus Baboun
Abstract:
Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engine tool of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf in their content and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analysing instances of metaphorical deafness (e.g. fall on deaf ears), (2) categorizing the content of the articles into the models of disability discourse (the medical, socio-cultural, and supercrip models of disability narratives), and (3) analysing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait. The most common idiomatic expression found was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the supercrip model. The additional data found in these articles include two instances of the term deaf and mute, an overwhelming use of lower case d for the term deaf, and the misuse of the term translator (to mean interpreter).
Keywords: deafness, disability, news coverage, Caribbean newspapers
Procedia PDF Downloads 236
12332 Theoretical Approach for Estimating Transfer Length of Prestressing Strand in Pretensioned Concrete Members
Authors: Sun-Jin Han, Deuck Hang Lee, Hyo-Eun Joo, Hyun Kang, Kang Su Kim
Abstract:
In pretensioned concrete members, there exists a transfer length region in which the stress in the prestressing strand is developed through the bond mechanism with the surrounding concrete. The stress of the strands in the transfer length zone is smaller than that in the strain plateau zone (the so-called effective prestress); therefore, the web-shear strength in the transfer length region is smaller than that in the strain plateau zone. Although the transfer length is a key factor in shear design, only a few analytical studies have been conducted to investigate it. Therefore, in this study, a theoretical approach was used to estimate the transfer length. The bond stress developed between the strands and the surrounding concrete was quantitatively calculated using the Thick-Walled Cylinder Model (TWCM), and based on this, the transfer length of the strands was calculated. To verify the proposed model, a total of 209 test results were collected from previous studies. The analysis results showed that the main factors influencing the transfer length are the compressive strength of the concrete, the concrete cover thickness, the diameter of the prestressing strand, and the magnitude of the initial prestress. In addition, the proposed model predicted the transfer length of the collected test specimens with high accuracy. Acknowledgement: This research was supported by a grant (17TBIP-C125047-01) from the Technology Business Innovation Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Keywords: bond, Hoyer effect, prestressed concrete, prestressing strand, transfer length
Procedia PDF Downloads 301
12331 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue for planning wind power generation facilities. Accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, with time intervals of ten minutes. This dataset was used to develop an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers and with every hidden layer containing from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414 and the Root Mean Squared Error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
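A minimal sketch of the configuration search described above is given below. It is not the authors' code: scikit-learn is assumed as the library, it offers no Levenberg-Marquardt solver (so 'lbfgs' stands in for the training algorithm used in the paper), and the train/test split is illustrative.

```python
# Sketch of the 1,110-configuration search: 1-3 hidden layers, 1-10 neurons each.
from itertools import product

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor


def search_best_ann(X_train, y_train, X_test, y_test):
    """X columns: air temperature, atmospheric pressure, wind direction; y: wind speed."""
    best = {"r2": -np.inf, "layers": None, "model": None}
    for depth in (1, 2, 3):
        for layers in product(range(1, 11), repeat=depth):  # 10 + 100 + 1000 = 1,110 configs
            model = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                                 max_iter=2000, random_state=0)
            model.fit(X_train, y_train)
            r2 = r2_score(y_test, model.predict(X_test))
            if r2 > best["r2"]:
                best = {"r2": r2, "layers": layers, "model": model}
    rmse = np.sqrt(mean_squared_error(y_test, best["model"].predict(X_test)))
    return best, rmse
```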
Procedia PDF Downloads 129
12330 The Influence of the Diameter of the Flow Conducts on the Rheological Behavior of a Non-Newtonian Fluid
Authors: Hacina Abchiche, Mounir Mellal, Imene Bouchelkia
Abstract:
Knowledge of the rheological behavior of the products used in different fields is essential, both for numerical simulation and for understanding the phenomena involved during the flow of these products. Fluids presenting nonlinear behavior represent an important category of materials used in the food-processing, chemical, pharmaceutical and oil industries. The issue is that rheological characterization with a classical rheometer cannot simulate, or take into consideration, the different parameters affecting the characterization of a complex fluid flow in real time. The main objective of this study is to investigate the influence of the diameter of the flow conduits or pipes on the rheological behavior of a non-Newtonian fluid and to propose a mathematical model linking the rheological parameters to the diameter of the flow conduits. For this purpose, we have developed an experimental system based on the principle of a capillary rheometer.
Keywords: rheology, non-Newtonian fluids, experimental study, mathematical model, cylindrical conduits
Procedia PDF Downloads 294
12329 Investigation of the Progressive Collapse Potential in Steel Buildings with Composite Floor System
Authors: Pouya Kaafi, Gholamreza Ghodrati Amiri
Abstract:
Abnormal loads due to natural events, implementation errors and other issues can lead to the occurrence of progressive collapse in structures. Most past research has used 2-Dimensional (2D) models of steel frames without consideration of the floor system effects, which reduces the accuracy of the modeling, whereas employing a 3-Dimensional (3D) model and modeling the concrete slab system of the floors play a crucial role in the progressive collapse evaluation. In this research, a 3D finite element model of a 5-story steel building is built in the ABAQUS software, once with the slabs modeled and once without them. Then, the progressive collapse potential is evaluated. The results of the analyses indicate that neglecting the slabs in the analyses can lead to inaccuracy in assessing the progressive failure potential of the structure.
Keywords: abnormal loads, composite floor system, intermediate steel moment resisting frame system, progressive collapse
Procedia PDF Downloads 459
12328 A Human Centered Design of an Exoskeleton Using Multibody Simulation
Authors: Sebastian Kölbl, Thomas Reitmaier, Mathias Hartmann
Abstract:
Trial and error approaches to adapt wearable support structures to human physiology are time-consuming and elaborate. However, during preliminary design, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multi-body simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. Therefore, different motion data have been gathered and processed to perform a musculoskeletal analysis. The motion data are ground reaction forces, electromyography data (EMG) and human motion data recorded with a marker-based motion capture system. Based on the experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The results of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, which are compared with admissible values with regard to human physiology. Furthermore, it provides feedback for the sizing of the exoskeleton structure in terms of resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach for the setup and validation of the modeling strategy is presented, and the potential for a more time- and cost-effective development of wearable support structures is outlined.
Keywords: assistive devices, ergonomic design, inverse dynamics, inverse kinematics, multibody simulation
Procedia PDF Downloads 168
12327 Pattern of Stress Distribution in Different Ligature-Wire-Brackets Systems: A FE and Experimental Analysis
Authors: Afef Dridi, Salah Mezlini
Abstract:
Since experimental devices cannot calculate the stress and deformation of complex structures, the Finite Element Method (FEM) has been widely used in several fields of research. One of these fields is orthodontics. The advantage of such an approach is the use of an accurate and non-invasive method that provides sufficient data about the physiological reactions that can happen in soft tissues. Most of the research done in this field has been interested in the study of stresses and deformations induced by orthodontic apparatus in soft tissues (alveolar tissues). Only a few studies have been interested in the distribution of stress and strain in the orthodontic brackets. Although these studies tried to be as close as possible to real conditions, their models did not reproduce the clinical cases. For this reason, the model generated by our research is the closest one to reality. In this study, a numerical model was developed to explore the stress and strain distribution under the application of real conditions. A comparison between different material properties was also done.
Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method
Procedia PDF Downloads 261
12326 Expanding the Evaluation Criteria for a Wind Turbine Performance
Authors: Ivan Balachin, Geanette Polanco, Jiang Xingliang, Hu Qin
Abstract:
The problem of global warming has raised interest in renewable energy sources. Reducing the cost of wind energy is a challenge. Before building a wind park, conditions such as average wind speed, wind direction, the duration of each wind condition and the probability of icing must be considered in the design phase. The operating values used in the setting of the control systems will also depend on these variables. Here, a procedure is proposed to be included in the evaluation of the performance of a wind turbine, based on the amplitude of wind changes, the number of changes and their duration. A generic case study based on actual data is presented. Data analysis techniques were applied to model the power required by the yaw system based on the amplitude and the number of wind changes. A theoretical model relating time, the amplitude of wind changes and the angular speed of nacelle rotation was identified.
Keywords: field data processing, regression determination, wind turbine performance, wind turbine placing, yaw system losses
Procedia PDF Downloads 394
12325 Height of Highway Embankment for Tolerable Residual Settlement of Loose Cohesionless Subsoil Overlain by Stronger Soil
Authors: Sharifullah Ahmed
Abstract:
The residual settlement of cohesionless or non-plastic subsoil of different strengths underlying a highway embankment and overlain by a stronger soil layer is studied. A parametric study is carried out for different embankment heights and different ESAL factors. The sum of the elastic settlements of the cohesionless subsoil due to axle-induced stress and due to the self-weight of the pavement layers is termed the residual settlement. The values of residual settlement (Sr) for different heights of road embankment (He) are obtained and presented as design charts for different SPT values (N60) and ESAL factors. For rigid pavement and for flexible pavement in the approach to a bridge or culvert, the tolerable residual settlement is 0.100 m. This limit is taken as 0.200 m for flexible pavement in general highway sections without a bridge or culvert approach. A simplified guideline is developed for the design of highway embankments underlain by very loose to loose cohesionless subsoil overlain by a stronger soil layer, for a limiting value of the residual settlement. In the current research study, the range of the ESAL factor is 1-10 and the range of the SPT value (N60) is 1-10. It is found that ground improvement is not required if the overlying stronger layer is at least 1.5 m thick for general flexible-pavement road sections (excluding bridge or culvert approaches) and at least 4.0 m thick for rigid pavement or flexible pavement in a bridge or culvert approach. Tables and charts are included in the prepared guideline to obtain the minimum allowable height of the highway embankment that limits the residual settlement to the mentioned tolerable value. Allowable values of the embankment height (He) are obtained corresponding to the tolerable or limiting level of the residual settlement of the loose subsoil for different SPT values, stronger-layer thicknesses (d) and ESAL factors. The developed guideline may be used in assessing the necessity of ground improvement where cohesionless subsoil underlying a highway embankment is overlain by a stronger subsoil layer, for limiting residual settlement. Ground improvement is only required if the residual settlement of the subsoil exceeds the tolerable limit.
Keywords: axle pressure, equivalent single axle load, ground improvement, highway embankment, tolerable residual settlement
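As an illustration only, the screening rule quoted above (minimum stronger-layer thickness of 1.5 m for general flexible-pavement sections and 4.0 m for rigid pavement or bridge/culvert approaches, with tolerable residual settlements of 0.200 m and 0.100 m respectively) can be written as a small helper; the function and argument names are assumptions, and the full design charts are still required whenever the thickness criterion is not met.

```python
# Sketch of the screening rule stated in the abstract (not the full design charts).
def embankment_screening(pavement: str, bridge_or_culvert_approach: bool,
                         stronger_layer_thickness_m: float):
    """Return the tolerable residual settlement and whether ground improvement
    needs to be assessed further with the design charts."""
    if pavement == "rigid" or bridge_or_culvert_approach:
        tolerable_settlement_m, min_layer_thickness_m = 0.100, 4.0
    else:                                  # general flexible-pavement section
        tolerable_settlement_m, min_layer_thickness_m = 0.200, 1.5
    check_with_design_charts = stronger_layer_thickness_m < min_layer_thickness_m
    return tolerable_settlement_m, check_with_design_charts
```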
Procedia PDF Downloads 138
12324 Recycling Service Strategy by Considering Demand-Supply Interaction
Authors: Hui-Chieh Li
Abstract:
The circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which bring benefits for both the environment and the economy. The concept stands in contrast to a linear economy, which follows a 'take, make, dispose' model of production. A well-designed reverse logistics service strategy could enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings the manufacturer considerable advantages as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycle demand and the time-dependent surplus value of the recycled product, and constructs models of the recycle service strategy for the recyclable waste collector. A crucial factor in optimizing a recycle service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value and user behavior. The study proposes a recycle service strategy which differs significantly from the conventional uniform service strategy. Periods with considerable demand and large surplus product value suggest frequent and short service cycles. The study explores how to determine a recycle service strategy for the recyclable waste collector in terms of service cycle frequency and duration and vehicle type for all service cycles, by considering the surplus value of the recycled product, time-dependent demand, transportation economies and demand-supply interaction. The recyclable waste collector is responsible for the collection of waste products for the manufacturer. The study also examines the impacts of the utilization rate on cost and profit for different vehicle sizes. The model applies mathematical programming methods and attempts to maximize the total profit of the distributor during the study period. This study applies the binary logit model, an analytical model and mathematical programming methods to the problem. The model specifically explores how to determine a recycle service strategy for the recycler by considering product surplus value, time-dependent recycle demand, transportation economies and demand-supply interaction. The model applies mathematical programming methods and attempts to minimize the total logistics cost of the recycler and maximize the recycling benefits of the manufacturer during the study period. The study relaxes the constant-demand assumption and examines how the service strategy affects consumer demand for waste recycling. The results not only help in understanding how user demand for the recycle service and the product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance, such as reward bonuses and carbon emission regulations, for the government.
Keywords: circular economy, consumer demand, product surplus value, recycle service strategy
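The binary logit model mentioned above relates the probability that a user chooses to recycle to the difference between the systematic utilities of recycling and of not recycling. In a minimal sketch, with \(V_r\) and \(V_n\) as assumed utility labels (typically specified as linear functions of product surplus value, service frequency and other attributes):
\[ P(\text{recycle}) = \frac{e^{V_r}}{e^{V_r}+e^{V_n}} = \frac{1}{1+e^{-(V_r-V_n)}} . \]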
Procedia PDF Downloads 394
12323 Towards Sustainable Consumption: A Framework for Assessing Supplier's Commitment
Authors: O. O. Oguntoye
Abstract:
Product consumption constitutes an important consideration for sustainable development. Given that product consumption can be highly unsustainable, and that existing policies on corporate responsibility do not particularly address the consumption aspect of the product lifecycle, conducting this research became necessary. The research attempts to provide a framework by which to gauge the corporate responsibility of product suppliers in terms of their commitment towards the sustainable consumption of their products. Through an exploration of relevant literature, independently established ideas with which to assess a given product supplier were galvanised into a four-criterion framework. The criteria are: (1) embeddedness of consumption as a factor in corporate sustainability policy, (2) level of understanding of consumption behaviour, (3) breadth of behaviour-influencing strategies adopted, and (4) inclusiveness of all main dimensions of sustainability. The resulting framework was then applied in a case study involving a UK-based furniture supplier, where interviews and content analysis of corporate documents were used as the modes of primary data collection. From the case study, it was found that the supplier had performed to different levels across the four themes of the assessment. Two major areas for improvement were, however, identified: one is for the furniture supplier to focus more proactively on understanding consumption behaviour, and two is for it to widen the scope of its current strategies for enhancing sustainable consumption of the supplied furniture. As a generalisation, the framework presented here makes it possible for companies to reflect, with a sense of guidance, on how they have demonstrated commitment towards sustainable consumption through their values, culture, and operations. It also provides a foundation for developing a standardized assessment which the current widely used frameworks such as the GRI, the Global Compact, and others do not cover. While these popularly used frameworks mainly focus on the sustainability of companies within the production and supply chain management contexts (i.e. mostly 'upstream'), the framework here provides an extension by bringing the 'downstream', or consumer, side to light.
Keywords: corporate sustainability, design for sustainable consumption, extended producer responsibility, sustainable consumer behaviour
Procedia PDF Downloads 424
12322 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
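A minimal PyTorch sketch of the kind of architecture described above is given below. It is not the authors' implementation: the residual CNN blocks and the up-sampling decoders are simplified away, the loss weights are assumed values, and the Mamba block is replaced by a gated-convolution placeholder where a real selective state-space layer would go.

```python
import torch
import torch.nn as nn


class MambaPlaceholder(nn.Module):
    """Stand-in for a Mamba (selective SSM) block; illustrative only."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.gate = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, time)
        return x + torch.sigmoid(self.gate(x)) * self.conv(x)


class EQModelSketch(nn.Module):
    def __init__(self, in_ch=3, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(            # conv + max-pooling encoder
            nn.Conv1d(in_ch, hidden, 11, padding=5), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, 9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.mamba = MambaPlaceholder(2 * hidden)
        # separate heads for detection, P-wave picking and S-wave picking
        self.heads = nn.ModuleDict({k: nn.Conv1d(2 * hidden, 1, 1)
                                    for k in ("det", "p", "s")})

    def forward(self, x):                        # x: (batch, 3 components, time)
        z = self.encoder(x)                      # time axis is down-sampled by 4
        z, _ = self.bilstm(z.transpose(1, 2))    # (batch, time', 2*hidden)
        z = self.mamba(z.transpose(1, 2))
        return {k: torch.sigmoid(h(z)).squeeze(1) for k, h in self.heads.items()}


def multitask_loss(pred, target, w=(0.5, 0.25, 0.25)):
    """Weighted sum of per-task binary cross-entropy losses (weights assumed).
    Targets must match the down-sampled time resolution of this sketch."""
    bce = nn.BCELoss()
    return sum(wi * bce(pred[k], target[k]) for wi, k in zip(w, ("det", "p", "s")))
```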
Procedia PDF Downloads 63
12321 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from limited solution efficiency. As a result, SWTA and DWTA problems have only been solved for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: the decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model, several greedy-based algorithms are proposed. These show lower performance values than the decomposed opt-opt algorithm, but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
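For orientation, a sketch of the simplest of the three heuristics (the greedy algorithm) is shown below for the classical static WTA objective of minimizing the expected surviving target value; the paper's time-based variant adds launcher scheduling over continuous time, which is omitted here, and the kill probabilities p[w][t] and target values are assumed inputs.

```python
# Greedy heuristic sketch for static weapon-target assignment (illustrative only).
def greedy_wta(values, p, shots_per_weapon=1):
    """values[t]: value of target t; p[w][t]: kill probability of weapon w on target t."""
    surviving = list(values)                   # expected surviving value per target
    shots = [shots_per_weapon] * len(p)        # remaining shots per weapon
    assignment = []                            # chosen (weapon, target) pairs
    while any(shots):
        # pick the weapon-target pair with the largest marginal reduction
        gain, w, t = max((surviving[t] * p[w][t], w, t)
                         for w in range(len(p)) if shots[w]
                         for t in range(len(values)))
        if gain <= 0:
            break
        surviving[t] *= (1 - p[w][t])          # update expected survival of target t
        shots[w] -= 1
        assignment.append((w, t))
    return assignment, sum(surviving)


# Example usage with two launchers and three targets (illustrative numbers):
# values = [5.0, 8.0, 3.0]
# p = [[0.7, 0.6, 0.5], [0.4, 0.8, 0.3]]
# print(greedy_wta(values, p))
```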
Procedia PDF Downloads 340
12320 Empowering Children through Co-creation: Writing a Book with and for Children about Their First Steps Towards Urban Independence
Authors: Beata Patuszynska
Abstract:
Children are largely absent from Polish social discourse, a fact which is mirrored in urban planning processes. Their absence creates a vicious circle – an unfriendly urban space discourages children from going outside on their own, meaning adults do not see a need to make spaces more friendly for a group that is not present. The pandemic and lockdown, with their closed schools and temporary ban on unaccompanied minors on the streets, have only reinforced this. The project – co-writing with children a book concerning their first steps into urban independence - aims at empowering children, enabling them to find their voice when it comes to urban space. The foundation for the book was data collected during research and workshops with children from Warsaw primary schools, aged 7-10 - the age they begin independent travel in the city. The project was carried out with the participation and involvement of children at each creative step. Children were (1) models: the narrator is a 7-year-old boy getting ready for urban independence. He shares his experience as well as the experience of his school friends and his 10-year-old sister, who already travels on her own. Children were (2) teachers: the book is based on authentic children’s stories and experience, along with the author’s findings from research undertaken with children. The material was extended by observations and conclusions made during the pandemic. Children were (3) reviewers: a series of draft chapters from the book underwent review by children during workshops performed in a school. The process demonstrated that all children experience similar pleasures and worries when it comes to interaction with urban space. Furthermore, they also have similar needs that need satisfying. In my article, I will discuss: (1) the advantages of creating together with children; (2) my conclusions on how to work with children in participatory processes; (3) research results: perceptions of urban space by children aged 7-10, when they begin their independent travel in the city; the barriers to and pleasures derived from independent urban travel; the influence of the pandemic on children’s feelings and their behaviour in urban spaces.
Keywords: children, urban space, co-creation, participation, human rights
Procedia PDF Downloads 106
12319 From Restraint to Obligation: The Protection of the Environment in Times of Armed Conflict
Authors: Aaron Walayat
Abstract:
The protection of the environment has been one of the most developed areas of international law in the context of international humanitarian law. This paper examines the history of the protection of the environment in times of armed conflict, beginning with the traditional notion of restraint observed in antiquity and moving towards the obligation to protect the environment, and examining the treaties and agreements, both binding and non-binding, which have contributed to environmental protection in war. The paper begins with a discussion of the ancient concept of restraint. This section examines the social norms in favor of protection of the environment as observed in the Bible, Greco-Roman mythology, and even more contemporary literature. The study of the traditional rejection of total war establishes the social foundation from which the current legal regime has stemmed. The paper then studies the principle of restraint as codified in international humanitarian law. It mainly examines Additional Protocol I to the Geneva Conventions of 1949 and existing international law concerning civilian objects, as well as the principles of international humanitarian law governing the classification between civilian objects and military objectives. The paper then explores the environment's classification as both a military objective and a civilian object, as well as arguments in favor of classifying the whole environment as a civilian object. The paper then discusses the current legal regime surrounding the protection of the environment, covering declarations and conventions including the 1868 Declaration of St. Petersburg, the 1907 Hague Convention No. IV, the Geneva Conventions, and the 1976 Environmental Modification Convention. The paper concludes by outlining how the principles of restraint came to be codified in the various treaties, agreements, and declarations of the current regime of international humanitarian law. This paper provides an analysis of the history and significance of the relationship between international humanitarian law and the growing field of international environmental law, to which it has been a major contributor.
Keywords: armed conflict, environment, legal regime, restraint
Procedia PDF Downloads 209
12318 Experimental Determination of Aluminum 7075-T6 Parameters Using Stabilized Cycle Tests to Predict Thermal Ratcheting
Authors: Armin Rahmatfam, Mohammad Zehsaz, Farid Vakili Tahami, Nasser Ghassembaglou
Abstract:
In this paper, the kinematic hardening parameters C and γ, the isotropic hardening parameters, and the combined isotropic/kinematic hardening parameters k, b and Q have been obtained experimentally from monotonic and strain-controlled cyclic tests at room and elevated temperatures of 20°C, 100°C, and 400°C, with the aim of predicting thermal ratcheting. These parameters are used in a nonlinear combined isotropic/kinematic hardening model to better describe the loading and reloading cycles in cyclic indentation as well as thermal ratcheting. For this purpose, three groups of specimens made of Aluminum 7075-T6 have been investigated. After each test, and using the stable hysteretic cycles, the material parameters have been obtained for use in combined nonlinear isotropic/kinematic hardening models. The methodology for obtaining the correct kinematic/isotropic hardening parameters is also presented.
Keywords: combined hardening model, kinematic hardening, isotropic hardening, cyclic tests
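For context, the parameters listed above appear in the standard Lemaitre-Chaboche combined hardening model (the exact form used in the paper is an assumption here): k is the initial yield surface size, C and γ govern the nonlinear kinematic hardening of the backstress \(\boldsymbol{\alpha}\), and Q and b the saturating isotropic hardening R as a function of the accumulated plastic strain p:
\[ f=\sqrt{\tfrac{3}{2}\,(\mathbf{s}-\boldsymbol{\alpha}) : (\mathbf{s}-\boldsymbol{\alpha})}-R-k\le 0,\qquad \dot{\boldsymbol{\alpha}}=\tfrac{2}{3}\,C\,\dot{\boldsymbol{\varepsilon}}^{p}-\gamma\,\boldsymbol{\alpha}\,\dot{p},\qquad R=Q\left(1-e^{-b\,p}\right). \]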
Procedia PDF Downloads 484
12317 A Parallel Poromechanics Finite Element Method (FEM) Model for Reservoir Analyses
Authors: Henrique C. C. Andrade, Ana Beatriz C. G. Silva, Fernando Luiz B. Ribeiro, Samir Maghous, Jose Claudio F. Telles, Eduardo M. R. Fairbairn
Abstract:
The present paper aims at developing a parallel computational model for numerical simulation of poromechanics analyses of heterogeneous reservoirs. In the context of macroscopic poroelastoplasticity, the hydromechanical coupling between the skeleton deformation and the fluid pressure is addressed by means of two constitutive equations. The first state equation relates the stress to skeleton strain and pore pressure, while the second state equation relates the Lagrangian porosity change to skeleton volume strain and pore pressure. A specific algorithm for local plastic integration using a tangent operator is devised. A modified Cam-clay type yield surface with associated plastic flow rule is adopted to account for both contractive and dilative behavior.
Keywords: finite element method, poromechanics, poroplasticity, reservoir analysis
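The two state equations mentioned above correspond, in their standard isotropic Biot form (a sketch; the paper's anisotropic or elastoplastic generalisation may differ), to the following, with \(\mathbb{C}\) the drained stiffness, b the Biot coefficient, N the Biot modulus, p the pore pressure and \(\phi\) the Lagrangian porosity:
\[ \boldsymbol{\sigma}=\mathbb{C}:\boldsymbol{\varepsilon}-b\,p\,\mathbf{1},\qquad \phi-\phi_{0}=b\,\operatorname{tr}\boldsymbol{\varepsilon}+\frac{p}{N}. \]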
Procedia PDF Downloads 395
12316 Nonparametric Estimation of Risk-Neutral Densities via Empirical Esscher Transform
Authors: Manoel Pereira, Alvaro Veiga, Camila Epprecht, Renato Costa
Abstract:
This paper introduces an empirical version of the Esscher transform for risk-neutral option pricing. Traditional parametric methods require the formulation of an explicit risk-neutral model and are operational only for a few probability distributions for the returns of the underlying. In our proposal, we make only mild assumptions on the pricing kernel, and there is no need to formulate a risk-neutral model for the returns. First, we simulate sample paths for the returns under the physical distribution. Then, based on the empirical Esscher transform, the sample is reweighted, giving rise to a risk-neutralized sample from which derivative prices can be obtained by a weighted sum of the option pay-offs over the paths. We compare our proposal with some traditional parametric pricing methods in four experiments with artificial and real data.
Keywords: Esscher transform, Generalized Autoregressive Conditional Heteroskedasticity (GARCH), nonparametric option pricing
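A minimal sketch of the reweighting step described above (not the authors' code) is shown below: cumulative log-returns are simulated under the physical measure, the Esscher parameter theta is chosen so that the reweighted, discounted underlying is a martingale, and the option price is the discounted weighted average of pay-offs; all symbols and the call pay-off are illustrative.

```python
import numpy as np
from scipy.optimize import brentq


def esscher_price(returns, S0, K, r, T):
    """returns: simulated cumulative log-returns over [0, T], one value per path."""
    ST = S0 * np.exp(returns)

    def martingale_gap(theta):
        w = np.exp(theta * returns)
        w /= w.sum()                              # empirical Esscher weights
        return np.dot(w, ST) - S0 * np.exp(r * T)

    theta = brentq(martingale_gap, -50.0, 50.0)   # risk-neutralizing tilt
    w = np.exp(theta * returns)
    w /= w.sum()
    payoff = np.maximum(ST - K, 0.0)              # European call as example
    return np.exp(-r * T) * np.dot(w, payoff)


# Example with Gaussian physical returns (illustrative parameters):
# rets = np.random.default_rng(0).normal(0.08, 0.2, 100_000)
# print(esscher_price(rets, S0=100.0, K=100.0, r=0.03, T=1.0))
```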
Procedia PDF Downloads 493
12315 Stock Prediction and Portfolio Optimization Thesis
Authors: Deniz Peksen
Abstract:
This thesis aims to predict the trend movement of stock closing prices and to maximize a portfolio by utilizing the predictions. In this context, the study aims to define a stock portfolio strategy from models created using Logistic Regression, Gradient Boosting and Random Forest. Recently, predicting the trend of the stock price has gained a significant role in making buy and sell decisions and generating returns with investment strategies formed by machine-learning-based decisions. There are plenty of studies in the literature on the prediction of stock prices in capital markets using machine learning methods, but most of them focus on closing prices instead of the direction of the price trend. Our study differs from the literature in terms of target definition: ours is a classification problem focusing on the market trend over the next 20 trading days. To predict the trend direction, fourteen years of data were used for training, the following three years for validation, and the last three years for testing. Training data are between 2002-06-18 and 2016-12-30; validation data are between 2017-01-02 and 2019-12-31; testing data are between 2020-01-02 and 2022-03-17. We set the Hold Stock Portfolio, the Best Stock Portfolio and the USD-TRY exchange rate as benchmarks to outperform, and compared our machine-learning-based portfolio return on the test data with the returns of these benchmarks. We assessed model performance with the help of the ROC-AUC score and lift charts, and used Logistic Regression, Gradient Boosting and Random Forest with a grid search approach to fine-tune hyper-parameters. As a result of the empirical study, the existence of an uptrend or downtrend for the five stocks could not be predicted by the models. When we used these predictions to define buy and sell decisions in order to generate a model-based portfolio, the model-based portfolio failed on the test dataset: model-based buy and sell decisions generated a stock portfolio strategy whose returns could not outperform the non-model portfolio strategies. We found that any effort to predict a trend formulated on the stock price is a challenge, and obtained the same result as the Random Walk Theory claims, namely that stock prices and price changes are unpredictable. Our model iterations failed on the test dataset; although we built several good models on the validation dataset, they failed on the test dataset. We implemented Random Forest, Gradient Boosting and Logistic Regression and discovered that the more complex models did not provide an advantage or additional performance compared with Logistic Regression. More complexity did not lead to better performance, and using a complex model is not the answer to the stock-related prediction problem. Our approach was to predict the trend instead of the price, which converted the problem into classification. However, this labelling approach did not lead us to solve the stock prediction problem, nor to deny or refute the accuracy of the Random Walk Theory for the stock price.
Keywords: stock prediction, portfolio optimization, data science, machine learning
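A minimal sketch of the modelling setup described above is given below (not the thesis code): a 20-trading-day-ahead trend label, the date-based train/validation/test split quoted in the abstract, and a grid search over a logistic regression baseline scored by ROC-AUC; the column names, feature set and hyper-parameter grid are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, PredefinedSplit


def build_label(close: pd.Series, horizon: int = 20) -> pd.Series:
    """1 if the closing price is higher `horizon` trading days later, else 0."""
    return (close.shift(-horizon) > close).astype(int)


def fit_and_score(df: pd.DataFrame, features: list) -> float:
    """df: sorted DatetimeIndex frame with a 'close' column plus feature columns."""
    df = df.assign(y=build_label(df["close"])).dropna()
    train = df.loc["2002-06-18":"2016-12-30"]
    valid = df.loc["2017-01-02":"2019-12-31"]
    test = df.loc["2020-01-02":"2022-03-17"]

    # Use the explicit validation block (not random CV) for hyper-parameter tuning.
    X = pd.concat([train, valid])[features]
    y = pd.concat([train, valid])["y"]
    fold = [-1] * len(train) + [0] * len(valid)   # -1 = always kept in training
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          {"C": [0.01, 0.1, 1.0, 10.0]},
                          scoring="roc_auc", cv=PredefinedSplit(fold))
    search.fit(X, y)
    return roc_auc_score(test["y"], search.predict_proba(test[features])[:, 1])
```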
Procedia PDF Downloads 86
12314 Evaluation of Pragmatic Information in an English Textbook: Focus on Requests
Authors: Israa A. Qari
Abstract:
Learning to request in a foreign language is a key ability within pragmatics language teaching. This paper examines how requests are taught in English Unlimited Book 3 (Cambridge University Press), an EFL textbook series employed by King Abdulaziz University in Jeddah, Saudi Arabia, to teach English to advanced foundation year students. The focus of analysis is the evaluation of the request linguistic strategies present in the textbook, the frequency of use of these strategies, and the contextual information provided on the use of these linguistic forms. The researcher collected all the linguistic forms which constituted the request speech act and divided them into levels employing the CCSARP request coding manual. Findings demonstrated that simple and commonly employed request strategies are introduced. Looking closely at the exercises throughout the chapters, it was noticeable that the book exclusively employed the most direct form of requesting (the imperative) when giving learners instructions: e.g. listen, write, ask, answer, read, look, complete, choose, talk, think, etc. The book also made use of some other request strategies such as 'hedged performatives' and 'query preparatory'. However, it was also found that many strategies were not dealt with in the book, specifically strategies with combined functions (e.g. possibility, ability). On a sociopragmatic level, a strong focus was found to exist on standard situations in which relations between the requester and requestee are clear. In general, contextual information was communicated only implicitly. The textbook did not seem to differentiate between formal and informal request contexts (register), which might consequently impel students to overgeneralize. The paper closes with some recommendations for textbook and curriculum designers. Findings are also contrasted with previous results from a similar body of research on EFL requests.
Keywords: EFL, requests, Saudi, speech acts, textbook evaluation
Procedia PDF Downloads 140