Search results for: diurnal temperature cycle model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 23217

12777 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number

Authors: Younes El Khchine, Mohammed Sriti

Abstract:

A parametric study has been conducted to analyse the flow around the S809 wind turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds number by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh using the γ-Reθt transition turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and on the aerodynamic performance of the wind turbine. The results show that increasing the Reynolds number delays laminar separation on the upper surface of the airfoil. The increase in Reynolds number also accelerates the transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer; this leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. An increase in the level of free-stream turbulence intensity leads to a decrease in separation bubble length and an increase in the lift coefficient, while having negligible effects on the stall angle. When the AoA is increased, the bubble on the suction surface of the airfoil moves upstream toward the leading edge, causing earlier laminar separation.

Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number

Procedia PDF Downloads 62
12776 The Impact of Organizational Justice on Organizational Loyalty Considering the Role of Spirituality and Organizational Trust Variable: Case Study of South Pars Gas Complex

Authors: Sima Radmanesh, Nahid Radmanesh, Mohsen Yaghmoor

Abstract:

The presence of a large number of active rival gas companies along the Persian Gulf border necessitates the adoption and implementation of effective employee retention strategies, as well as strategies for promoting the loyalty and sense of belonging of specialized staff, at the South Pars gas company. Hence, this study aims at assessing the level of organizational loyalty and explaining the effect of organizational justice on organizational loyalty with regard to the mediating roles of spirituality in the workplace and organizational trust. Through a review of the related literature, the researchers derived a conceptual model for the effect of these factors on organizational loyalty. This model was then assessed and tested through questionnaires administered at the South Pars gas company. The research method was descriptive-correlational, using structural equation modeling. The findings of the study indicated significant relationships between the concepts addressed in the research, and the conceptual model was confirmed. Finally, based on the results, recommendations are provided for improving the factors affecting organizational loyalty.

Keywords: organizational loyalty, organizational trust, organizational justice, organizational spirit, oil and gas company

Procedia PDF Downloads 443
12775 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of the present work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each ROI, six distinct sets of texture features are extracted, namely first-order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. We show that with the injection of contrast liquid and the analysis of more phases, the most relevant features in each region change. Our results show that for detecting HCC tumors, phase 3 is the best for most of the features we apply to the classification algorithm. The detection performance between these two classes according to our method, using first-order histogram parameters, reaches an accuracy of 85% in phase 1, 95% in phase 2, and 95% in phase 3.
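
As a rough illustration of the hidden-layer sizing step described in this abstract, the sketch below searches over candidate neuron counts for a one-hidden-layer network; the synthetic 270-feature dataset, the candidate sizes and the scoring choice are assumptions standing in for the study's ROI texture features, not the authors' actual setup.

```python
# Minimal sketch: choose a hidden-layer size for a one-hidden-layer classifier
# by cross-validated grid search. Data and candidate sizes are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for the 270 ROI texture features (synthetic stand-in data)
X, y = make_classification(n_samples=300, n_features=270, n_informative=30,
                           random_state=0)

pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=2000, random_state=0))
grid = GridSearchCV(pipe,
                    {"mlpclassifier__hidden_layer_sizes": [(5,), (10,), (20,), (40,)]},
                    cv=5, scoring="accuracy")
grid.fit(X, y)
print("best hidden-layer size:", grid.best_params_)
print("cross-validated accuracy:", round(grid.best_score_, 3))
```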

Keywords: multi-phasic liver images, texture analysis, neural network, hidden layer

Procedia PDF Downloads 250
12774 The School Based Support Program: An Evaluation of a Comprehensive School Reform Initiative in the State of Qatar

Authors: Abdullah Abu-Tineh, Youmen Chaaban

Abstract:

This study examines the development of a professional development (PD) model for teacher growth and learning that is embedded in the school context. The School-Based Support Program (SBSP), designed for the Qatari context, targets the practices, knowledge and skills of both school leadership and teachers in an attempt to improve student learning outcomes. Key aspects of the model include the development of learning communities among teachers, strong leadership that supports school improvement activities, and the use of research-based PD to improve teacher practices and student achievement. This paper further presents findings from an evaluation of this PD program. Based on an adaptation of Guskey's model for evaluating PD, 100 teachers at the participating schools were selected for classroom observations and 40 took part in in-depth interviews to examine changed classroom practices. The impact of the PD program on student learning was also examined. Teachers' practices and their students' achievement in English, Arabic, mathematics and science were measured at the beginning and at the end of the intervention.

Keywords: initiative, professional development, school based support Program (SBSP), school reform

Procedia PDF Downloads 475
12773 The Pedagogical Integration of Digital Technologies in Initial Teacher Training

Authors: Vânia Graça, Paula Quadros-Flores, Altina Ramos

Abstract:

The use of digital technologies in teaching and learning processes is currently a reality, namely in initial teacher training. This study aims to understand the digital reality of students in initial teacher training in order to improve training in the educational use of ICT and to promote digital technology integration strategies in an educational context. It is part of the IFITIC Project "Innovate with ICT in Initial Teacher Training to Promote Methodological Renewal in Pre-school Education and in the 1st and 2nd Basic Education Cycle", which involves the School of Education, Polytechnic of Porto and the Institute of Education, University of Minho. The project aims at rethinking educational practice with ICT in the initial training of future teachers in order to promote methodological innovation in pre-school education and in the 1st and 2nd cycles of basic education. A qualitative methodology was used, in which a questionnaire survey was applied to teachers in initial training. For data analysis, content analysis techniques were used with the support of NVivo software. The results point to the following aspects: a) future teachers recognize that they have more technical knowledge about ICT than pedagogical knowledge; this result makes sense given the objectives of basic education, so the gaps can be filled in the Master's course by students who wish to pursue teaching; b) the respondents are aware that the integration of digital resources contributes positively to students' learning and to the lives of children and young people, which also prepares them for life; c) being a teacher in the digital age requires the development of digital literacy, lifelong learning, and the adoption of new ways of teaching how to learn. Thus, this study aims to contribute to a reflection on the teaching profession in the digital age.

Keywords: digital technologies, initial teacher training, pedagogical use of ICT, skills

Procedia PDF Downloads 111
12772 An Internet of Things-Based Weight Monitoring System for Honey

Authors: Zheng-Yan Ruan, Chien-Hao Wang, Hong-Jen Lin, Chien-Peng Huang, Ying-Hao Chen, En-Cheng Yang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Bees play a vital role in pollination. This paper focuses on the weighing process of honey. Honey is usually stored in the comb in a hive. Bee farmers brush bees away from the comb and then collect the honey, and the collected honey is weighed afterward. However, such a process has a strongly negative influence on bees and can even lead to their death. This paper therefore presents an Internet of Things-based weight monitoring system which uses weight sensors to measure the weight of honey and simplifies the whole weighing procedure. To verify the system, the weight measured by the system is compared to standard weights used for calibration by employing a linear regression model. The R² of the regression model is 0.9788, which suggests that the weighing system is highly reliable and can be applied to obtain the actual weight of honey. In the future, the weight data of honey can be used to find the relationship between honey production and different ecological parameters, such as bees' foraging behavior and weather conditions. It is expected that the findings can serve as critical information for honey production improvement.
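
The calibration step described above can be illustrated with a few lines of regression code; the reference weights and sensor readings below are invented for the example and are not the authors' measurements.

```python
# Minimal sketch of the calibration step: regress system readings against
# standard reference weights and report R². Numbers are made up for illustration.
import numpy as np

reference_g = np.array([0, 500, 1000, 2000, 5000, 10000])   # standard weights (g)
measured_g = np.array([12, 508, 1015, 2031, 5052, 10110])   # hypothetical sensor output (g)

slope, intercept = np.polyfit(reference_g, measured_g, deg=1)
predicted = slope * reference_g + intercept
ss_res = np.sum((measured_g - predicted) ** 2)
ss_tot = np.sum((measured_g - measured_g.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"measured ≈ {slope:.4f} * reference + {intercept:.2f} g, R² = {r_squared:.4f}")
```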

Keywords: internet of things, weight, honey, bee

Procedia PDF Downloads 443
12771 Comparative Evaluation of Kinetic Model of Chromium and Lead Uptake from Aqueous Solution by Activated Balanites aegyptiaca Seeds

Authors: Mohammed Umar Manko

Abstract:

A series of batch experiments was conducted in order to investigate the feasibility of Balanites aegyptiaca seed-based activated carbon, as compared with industrial activated carbon, for the removal of chromium and lead ions from aqueous solution by adsorption within 30 to 150 minutes of contact time. The activated samples were prepared using zinc chloride and tetraoxophosphate(VI) acid. The results obtained showed that the activated carbon from Balanites aegyptiaca seeds had relatively high adsorption capacities for these heavy metal ions compared with industrial activated carbon. The percentage removals of Cr(VI) and lead(II) ions by the three activated carbon samples were 64%, 70% and 71%, and 60%, 66% and 60%, respectively. Adsorption equilibrium was established in 90 minutes for the heavy metal ions. The equilibrium data fitted the pseudo-second-order model best out of the pseudo-first-order, pseudo-second-order, Elovich, Natarajan and Khalaf models tested. The investigation also showed that the adsorbents can effectively remove metal ions from similar wastewater and aqueous media.
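
A minimal sketch of the kinetic-model fitting mentioned above is given below: it fits the pseudo-second-order expression q_t = k2·qe²·t / (1 + k2·qe·t) to uptake-versus-time data. The contact times and uptake values are hypothetical, chosen only to show the workflow.

```python
# Minimal sketch of fitting the pseudo-second-order kinetic model to
# uptake-versus-time data. The data points below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    # q_t = (k2 * qe^2 * t) / (1 + k2 * qe * t)
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

t_min = np.array([30, 60, 90, 120, 150])        # contact time (min)
q_t = np.array([8.1, 10.9, 12.0, 12.3, 12.4])   # hypothetical uptake (mg/g)

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t_min, q_t, p0=[12.0, 0.01])
print(f"qe ≈ {qe_fit:.2f} mg/g, k2 ≈ {k2_fit:.4f} g/(mg·min)")
```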

Keywords: activated carbon, pseudo second order, chromium, lead, Elovich model

Procedia PDF Downloads 308
12770 Emotion Oriented Students' Opinioned Topic Detection for Course Reviews in Massive Open Online Course

Authors: Zhi Liu, Xian Peng, Monika Domanska, Lingyun Kang, Sannyuya Liu

Abstract:

Massive open online education has become increasingly popular among learners worldwide. An increasing number of course reviews are being generated on Massive Open Online Course (MOOC) platforms, which offer an interactive feedback channel for learners to express opinions and feelings about their learning. These reviews typically contain subjective emotion and topic information about the courses. However, it is time-consuming to detect these opinions manually. In this paper, we propose an emotion-oriented topic detection model to automatically detect the aspects students express opinions about in course reviews. The known overall emotion orientation and the emotional words in each review are used to guide the joint probabilistic modeling of emotions and aspects in reviews. Through experiments on real-life review data, it is verified that the course-emotion-aspect distribution can be calculated to capture the most significant opinioned topics in each course unit. The proposed technique helps in conducting intelligent learning analytics for teachers to improve pedagogies and for developers to promote user experiences.

Keywords: Massive Open Online Course (MOOC), course reviews, topic model, emotion recognition, topical aspects

Procedia PDF Downloads 251
12769 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode

Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno

Abstract:

Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂-derived compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods including sol-gel, gas condensation, spray pyrolysis, and ceramic routes. Problems with these methods persist, including high cost (making them commercially inapplicable) and the need for high temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by a reflux technique; (2) develop a microstructure analysis method from LiMₓMn₂₋ₓO₄ powder XRD data with the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. This research developed the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by reflux. The starting materials, Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping with Co, Ni and Cr was carried out using a solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on powder XRD data by the two-stage method using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability test was performed with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux. The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, the increase of the Li/Mn mole ratio does not result in changes in the LiMn₂O₄ crystal structure. The doping of Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02, 0.04, 0.06, 0.08, 0.10) does not change the cubic Fd3m crystal structure. All the crystals formed are polycrystalline with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its range of capacitance, the conductivity obtained for LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is ionic conductivity with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄, Li₁.₀₈Mn₁.₉₂O₄, LiCo₀.₁Mn₁.₉O₄, LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g, 2.73 mAh/g, 89.39 mAh/g, 85.15 mAh/g, and 1.48 mAh/g, respectively.

Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity

Procedia PDF Downloads 182
12768 The Analysis of a Reactive Hydromagnetic Internal Heat Generating Poiseuille Fluid Flow through a Channel

Authors: Anthony R. Hassan, Jacob A. Gbadeyan

Abstract:

In this paper, the analysis of a reactive hydromagnetic Poiseuille fluid flow under each of sensitized, Arrhenius and bimolecular chemical kinetics through a channel in the presence of a heat source is carried out. An exothermic reaction is assumed, while the consumption of the reacting material is neglected. The Adomian Decomposition Method (ADM) together with Padé approximation is used to obtain the solutions of the governing nonlinear non-dimensional differential equations. The effects of various physical parameters on the velocity and temperature fields of the fluid flow are investigated. The entropy generation analysis and the conditions for thermal criticality are also presented.
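
Since the ADM series itself is problem-specific, the sketch below only illustrates the Padé step that follows it: building a [2/2] Padé approximant from truncated Taylor coefficients (here of exp(x) as a stand-in for an ADM partial sum) and comparing it with the plain truncated series.

```python
# Minimal sketch of the Padé step applied to a truncated series expansion.
# exp(x) coefficients stand in for a problem-specific ADM partial sum.
import math
import numpy as np
from scipy.interpolate import pade

taylor = [1 / math.factorial(k) for k in range(5)]   # 1 + x + x²/2 + x³/6 + x⁴/24
p, q = pade(taylor, 2)                               # [2/2] Padé approximant

x = 1.0
print("truncated series :", sum(c * x**k for k, c in enumerate(taylor)))
print("Padé [2/2]       :", p(x) / q(x))
print("exact exp(1)     :", np.exp(x))
```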

Keywords: chemical kinetics, entropy generation, thermal criticality, Adomian decomposition method (ADM), Padé approximation

Procedia PDF Downloads 448
12767 Investigations in Machining of Hot Work Tool Steel with Mixed Ceramic Tool

Authors: B. Varaprasad, C. Srinivasa Rao

Abstract:

Hard turning has been explored as an alternative to the conventional processes used for the manufacture of parts from tool steels. In the present study, the effects of cutting speed, feed rate and depth of cut (DOC) on cutting forces, specific cutting force, power and surface roughness in hard turning are experimentally investigated. Experiments are carried out using a mixed ceramic (Al2O3+TiC) cutting tool with a corner radius of 0.8 mm in turning operations on AISI H13 tool steel heat treated to a hardness of 62 HRC. Based on Design of Experiments (DOE), a total of 20 tests are carried out. Each of the three parameters is set at three different levels, viz. low, medium and high. The validity of the model is checked by analysis of variance (ANOVA). Predictive models are derived from regression analysis. Comparison of experimental and predicted values of specific cutting force, power and surface roughness shows that good agreement has been achieved between them. Therefore, the developed model may be recommended for predicting specific cutting force, power and surface roughness in hard turning of AISI H13 tool steel.
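
A compact sketch of the regression-plus-ANOVA workflow named above is shown below; the 20-run dataset, factor levels and response model are fabricated for illustration and do not reproduce the authors' measurements.

```python
# Minimal sketch: fit surface roughness against cutting speed, feed and depth of
# cut, then run an ANOVA on the fitted model. All values are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "speed": rng.choice([100, 140, 180], size=20),    # m/min (assumed levels)
    "feed": rng.choice([0.05, 0.10, 0.15], size=20),  # mm/rev (assumed levels)
    "doc": rng.choice([0.1, 0.2, 0.3], size=20),      # mm (assumed levels)
})
# Synthetic response standing in for measured surface roughness Ra
df["Ra"] = 0.3 + 4.0 * df["feed"] + 0.5 * df["doc"] - 0.001 * df["speed"] \
           + rng.normal(0, 0.02, size=20)

model = smf.ols("Ra ~ speed + feed + doc", data=df).fit()
print(model.params)
print(sm.stats.anova_lm(model, typ=2))                # significance of each factor
```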

Keywords: hard turning, specific cutting force, power, surface roughness, AISI H13, mixed ceramic

Procedia PDF Downloads 690
12766 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard mini-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optimums. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
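
For the first echelon, a plain first-fit-decreasing heuristic gives a feel for how the number of workstations can be estimated from product workloads and a per-station picker capacity; this is a generic textbook heuristic with assumed numbers, not the authors' algorithm.

```python
# First-fit-decreasing sketch for the first-echelon question: how many
# workstations (bins) are needed to hold the product workloads given a
# per-station picker capacity. Workloads and capacity are assumed numbers.
def first_fit_decreasing(workloads, capacity):
    stations = []                      # each station holds a list of workloads
    for load in sorted(workloads, reverse=True):
        for station in stations:
            if sum(station) + load <= capacity:
                station.append(load)
                break
        else:                          # no existing station fits: open a new one
            stations.append([load])
    return stations

workloads = [30, 45, 10, 70, 25, 60, 15, 40, 35, 20]   # picked units/hour (assumed)
capacity = 100                                          # picker capacity per station (assumed)
stations = first_fit_decreasing(workloads, capacity)
print(f"{len(stations)} workstations:", stations)
```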

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 215
12765 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in general, descriptive terms. Research outsourced to specialists is expensive and time-consuming, and most of the focus is on the examination of a few selected places. Practice has shown that choosing the locations of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the resulting facilities are then not used as expected. Location methods are also a widely studied research topic in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g., by assuming that the city is linear and developed along one important communication corridor. This paper presents a new method in which expert knowledge is applied in a fuzzy inference model. With such a system, even less experienced users, e.g., urban planners and officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce the P&R system. The analysis of existing facilities is also shown in the paper, and these are confronted with the opinions of the system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model, which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared to documents on P&R facility locations commissioned by the city and to the opinions of existing facility users expressed on social networking sites. The analysis of existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to show alternative locations for P&R facilities. The studies performed confirm the method. The method can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
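
A minimal Mamdani-style sketch of the kind of fuzzy scoring described above is given below; the two criteria, the membership functions and the rule base are invented for illustration, whereas the authors' model uses its own expert-defined inputs and rules (described in their earlier paper).

```python
# Toy fuzzy-inference sketch for scoring a candidate P&R location from two
# example criteria. Criteria, memberships and rules are assumptions.
import numpy as np

def evaluate_location(distance_km, capacity_spots):
    # Fuzzify the two example inputs with simple linear memberships (assumed ranges)
    near = np.interp(distance_km, [0, 10], [1, 0])       # close to the city centre
    far = 1 - near
    large = np.interp(capacity_spots, [0, 600], [0, 1])  # plenty of parking spots
    small = 1 - large

    # Assumed rule base: "good" if near OR large; "poor" if far AND small
    good = max(near, large)
    poor = min(far, small)

    # Defuzzify as a weighted average of the output prototypes (poor = 0, good = 1)
    return (good * 1.0 + poor * 0.0) / (good + poor)

print("suitability score:", round(evaluate_location(distance_km=6, capacity_spots=450), 2))
```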

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 316
12764 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities; the model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. RFM analysis, traditionally used in customer value assessment, is used to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities and enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
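
The RFM-plus-K-means stage can be condensed into a short sketch (the LSTM stage is omitted); the failure log, features and cluster count below are fabricated assumptions used only to show the workflow.

```python
# Sketch of the RFM + K-means stage: score each electrical component by failure
# recency, frequency and monetary impact, then cluster into risk groups.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

failures = pd.DataFrame({
    "component": ["TX-01", "TX-01", "FD-07", "FD-07", "FD-07", "SW-12"],
    "days_ago": [5, 40, 2, 10, 25, 300],           # recency of each failure event
    "repair_cost": [1200, 800, 300, 450, 500, 90]  # monetary impact (assumed units)
})

rfm = failures.groupby("component").agg(
    recency=("days_ago", "min"),         # days since most recent failure
    frequency=("repair_cost", "count"),  # number of recorded failures
    monetary=("repair_cost", "sum"),     # total repair cost
)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm))
print(rfm.assign(risk_cluster=labels))
```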

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 39
12763 Effects of Pre-Task Activities on the Writing Performance of Second Language Learners

Authors: Wajiha Fatima

Abstract:

Based on Rod Ellis's (2002) methodology of task-based teaching, this study explored the effects of pre-task activities on the job application letters of 102 ESL students (all female undergraduate learners). For this purpose, the students were divided into three groups (Group A, Group B, and Group C) and placed in both control and experimental settings. The pre-task phase motivates learners to perform the actual task. Ellis discusses four types of pre-task phase: (1) performing a similar task; (2) providing a model; (3) non-task preparation activities; and (4) strategic planning. The experimental groups were taught through three of these pre-task activities. Accordingly, the learners in the control setting wrote without any teaching aid, while the learners in the experimental setting were provided with a different pre-task activity in each group. In order to compare the pre-test and post-test scores of the three groups, a paired-samples t-test was utilized. The results obtained from the written job applications revealed that pre-task activities improved the students' writing performance. On the other hand, the comparison of the three pre-task activities, for which ANOVA was utilized, revealed that 'providing a model' outperformed the other two activities.
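
The two statistical steps named above, a paired-samples t-test on pre/post scores and a one-way ANOVA across the three activities, can be illustrated as follows; all scores are invented for the example.

```python
# Sketch of the two tests: paired-samples t-test (pre vs. post within a group)
# and one-way ANOVA on gain scores across the three pre-task activities.
import numpy as np
from scipy import stats

pre = np.array([52, 48, 60, 55, 47, 58, 50, 62])     # hypothetical pre-test scores
post = np.array([61, 55, 66, 63, 52, 70, 58, 69])    # hypothetical post-test scores
t_stat, p_val = stats.ttest_rel(pre, post)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")

gain_model = np.array([9, 7, 6, 8, 5, 12, 8, 7])     # 'providing a model' group gains
gain_nontask = np.array([4, 6, 3, 5, 5, 4, 6, 3])    # 'non-task preparation' group gains
gain_planning = np.array([5, 4, 6, 5, 7, 4, 5, 6])   # 'strategic planning' group gains
f_stat, p_anova = stats.f_oneway(gain_model, gain_nontask, gain_planning)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```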

Keywords: pre-task activities, second language learners, task based language teaching, writing

Procedia PDF Downloads 161
12762 The Effects of Stoke's Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators

Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari

Abstract:

NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers in construction sites where a large amount of dust, both electrostatically charged and uncharged, is produced by sawing, grinding, blasting, welding, etc. A significant portion of the airborne particles in construction sites can be nanoparticles, created alongside coarse particles. The penetration of the particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this study, we found that nanoparticles of medium size ranges penetrate more frequently than smaller and larger nanoparticles. For example, penetration percentages of 11.5-27.4 nm nanoparticles into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of 36.5-86.6 nm nanoparticles ranged from 7.34 to 16.04%. The possible causes behind this increased penetration of mid-size nanoparticles through mask filters have not yet been explored. The objective of this study is to identify the causes behind this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, the Stokes drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only the particles that have enough kinetic energy to overcome the energy loss due to the electrostatic forces and the Stokes drag in the mask can pass through. To understand this process, the following assumptions were made: (1) the effect of Stokes drag depends on the particles' velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particles, which in turn is proportional to the surface area of the particles; (3) the general dependence on electrostatic charge and thickness means that the stronger the electrostatic resistance in the mask and the thicker the mask's fiber layers, the lower the penetration of particles, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol, eliminating the electrostatic interaction, the penetration in the mid-size range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the interaction of the particle with its own drag force. If there is no electrostatic force, the penetrating fraction for larger particles grows; but if the electrostatic force is added, the fraction for larger particles goes down, so the diminished penetration for larger particles should be due to increased electrostatic repulsion, possibly because of their increased surface area and therefore larger charge on average. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of the penetration on temperature is weak in the measured range of 37-42°C, since the factor 1/T changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
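
A back-of-the-envelope sketch of two of the factors discussed above, the thermal (Boltzmann) kinetic energy of a particle and the Stokes drag it feels at its thermal speed, is given below; the particle density and air viscosity are assumed values, and the Cunningham slip correction is omitted, so this only illustrates how the scales shift with particle size.

```python
# Rough scales for nanoparticles of different diameters: equipartition kinetic
# energy (3/2)kT, thermal speed from (3/2)kT = (1/2)mv², and Stokes drag 6*pi*mu*r*v.
# Particle density and air viscosity are assumed; no slip correction is applied.
import numpy as np

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 310.0                # ~37 °C, in K
mu_air = 1.9e-5          # dynamic viscosity of air, Pa·s (approximate)
rho_p = 2000.0           # assumed particle density, kg/m³

for d_nm in (20, 60, 200):
    r = d_nm * 1e-9 / 2
    m = rho_p * (4 / 3) * np.pi * r**3
    v_th = np.sqrt(3 * k_B * T / m)          # thermal speed
    f_drag = 6 * np.pi * mu_air * r * v_th   # Stokes drag at that speed
    print(f"d = {d_nm:>3} nm: kinetic energy = {1.5 * k_B * T:.2e} J, "
          f"thermal speed = {v_th:.2f} m/s, Stokes drag = {f_drag:.2e} N")
```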

Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force

Procedia PDF Downloads 183
12761 A Hybrid Model of Goal, Integer and Constraint Programming for Single Machine Scheduling Problem with Sequence Dependent Setup Times: A Case Study in Aerospace Industry

Authors: Didem Can

Abstract:

Scheduling problems are among the most fundamental issues of production systems. Many different approaches and models have been developed according to the production processes of the parts and the main purpose of the problem. In this study, one of the bottleneck stations of a company serving the aerospace industry is analyzed and considered as a single machine scheduling problem with sequence-dependent setup times. The objective of the problem is to assign a large number of similar parts to the same shift, in order to reduce chemical waste, while minimizing the number of tardy jobs. The goal programming method will be used to achieve the two objectives simultaneously. The assignment of parts to the shift will be expressed using the integer programming method. Finally, the constraint programming method will be used, as it provides a way to find a result in a short time by pruning inferior feasible solutions over the defined variable set. The model to be established will be tested and evaluated with real data in the application part.
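
A toy goal-programming sketch in the spirit of the model outlined above is shown below, using PuLP: jobs are assigned to one shift under a capacity limit, with two goals (schedule everything, keep the shift to one part family) expressed through deviation variables. The jobs, processing times, capacity and weights are assumptions, and sequence-dependent setup times are left out.

```python
# Minimal goal-programming sketch with PuLP: assign similar parts to one shift
# while limiting unscheduled jobs, treating both targets as goals with
# deviation variables. Illustration only; not the authors' full model.
import pulp

jobs = ["J1", "J2", "J3", "J4", "J5"]
proc = {"J1": 3, "J2": 2, "J3": 4, "J4": 1, "J5": 3}        # processing hours (assumed)
family = {"J1": "A", "J2": "A", "J3": "B", "J4": "A", "J5": "B"}
shift_hours = 8                                              # shift capacity (assumed)

m = pulp.LpProblem("shift_assignment_goal_program", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", jobs, cat=pulp.LpBinary)  # 1 if job goes to the shift
d_tardy = pulp.LpVariable("unscheduled_dev", lowBound=0)      # deviation from "all jobs done"
d_mix = pulp.LpVariable("family_mix_dev", lowBound=0)         # deviation from "family A only"

m += pulp.lpSum(proc[j] * x[j] for j in jobs) <= shift_hours              # capacity
m += pulp.lpSum(x[j] for j in jobs) + d_tardy >= len(jobs)                # goal 1
m += pulp.lpSum(x[j] for j in jobs if family[j] == "B") <= d_mix          # goal 2
m += 10 * d_tardy + 1 * d_mix                                 # weighted goal deviations

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: int(x[j].value()) for j in jobs}, d_tardy.value(), d_mix.value())
```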

Keywords: constraint programming, goal programming, integer programming, sequence-dependent setup, single machine scheduling

Procedia PDF Downloads 217
12760 A Literature Review on Development of a Forecast Supported Approach for the Continuous Pre-Planning of Required Transport Capacity for the Design of Sustainable Transport Chains

Authors: Georg Brunnthaller, Sandra Stein, Wilfried Sihn

Abstract:

Logistics service providers are facing increasing volatility concerning future transport demand. Short-term planning horizons and planning uncertainties lead to reduced capacity utilisation and increasing empty mileage. To overcome these challenges, a model is proposed to continuously pre-plan future transport capacity in order to redesign and adjust the intermodal fleet accordingly. It is expected that the model will enable logistics service providers to organise more economically and ecologically sustainable transport chains in a more flexible way. To further describe such planning aspects, this paper gives a structured literature review on transport planning problems. The focus is on the strategic and tactical planning levels, comprising the relevant fleet-sizing, network-design and choice-of-carriers problems. Models and their solution techniques are presented, and the literature review is concluded with an outlook on our future research objectives.

Keywords: choice of transport mode, fleet-sizing, freight transport planning, multimodal, review, service network design

Procedia PDF Downloads 352
12759 The Composition of Biooil during Biomass Pyrolysis at Various Temperatures

Authors: Zoltan Sebestyen, Eszter Barta-Rajnai, Emma Jakab, Zsuzsanna Czegeny

Abstract:

Extraction of the energy content of lignocellulosic biomass is one of the possible pathways to reduce the greenhouse gas emissions derived from the burning of fossil fuels. The application of bioenergy can mitigate a country's energy dependence on foreign natural gas and petroleum. The diversity of plant materials makes the utilization of raw biomass in power plants difficult. This problem can be overcome by the application of thermochemical techniques. Pyrolysis is the thermal decomposition of the raw materials under an inert atmosphere at high temperatures, which produces pyrolysis gas, biooil and charcoal. The energy content of these products can be exploited by further utilization. The differences in the chemical and physical properties of the raw biomass materials can be reduced by the use of torrefaction. Torrefaction is a promising mild thermal pretreatment method performed at temperatures between 200 and 300 °C in an inert atmosphere. The goal of the pretreatment, from a chemical point of view, is the removal of water and of the acidic groups of hemicelluloses, or of the whole hemicellulose fraction, with minor degradation of cellulose and lignin in the biomass. Thus, the stability of the biomass against biodegradation increases, while its energy density increases. The volume of the raw materials decreases, so the expenses of transportation and storage are reduced as well. Biooil is the major product during pyrolysis and an important by-product during torrefaction of biomass. The composition of biooil mostly depends on the quality of the raw materials and the applied temperature. In this work, thermoanalytical techniques have been used to study the qualitative and quantitative composition of the pyrolysis and torrefaction oils of a woody sample (black locust) and two herbaceous samples (rape straw and wheat straw). The biooil contains C5 and C6 anhydrosugar molecules as well as aromatic compounds originating from hemicellulose, cellulose, and lignin, respectively. In this study, special emphasis was placed on the formation of the lignin monomeric products. The structure of the lignin fraction is different in wood and in herbaceous plants. According to the thermoanalytical studies, the decomposition of lignin starts above 200 °C and ends at about 500 °C. The lignin monomers are present among the components of the torrefaction oil even at relatively low temperatures. We established that the concentration and composition of the lignin products vary significantly with the applied temperature, indicating that different decomposition mechanisms dominate at low and high temperatures. The evolution of decomposition products as well as the thermal stability of the samples were measured by thermogravimetry/mass spectrometry (TG/MS). The differences in the structure of the lignin products of the woody and herbaceous samples were characterized by pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS). As a statistical method, principal component analysis (PCA) was used to find correlations between the composition of the lignin products in the biooil and the applied temperatures.
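
The PCA step mentioned above can be sketched in a few lines; the sample names are taken from the abstract, but the peak-area matrix and the choice of lignin marker compounds are hypothetical, used only to show the workflow.

```python
# PCA sketch: rows are samples, columns are relative peak areas of assumed
# lignin marker products. Values are fabricated for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = ["black locust 300C", "black locust 500C", "rape straw 300C",
           "rape straw 500C", "wheat straw 300C", "wheat straw 500C"]
peak_areas = np.array([      # hypothetical areas: [guaiacol, syringol, 4-vinylphenol]
    [0.35, 0.45, 0.05],
    [0.40, 0.35, 0.05],
    [0.25, 0.20, 0.30],
    [0.30, 0.15, 0.25],
    [0.20, 0.18, 0.35],
    [0.28, 0.12, 0.30],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(peak_areas))
for name, (pc1, pc2) in zip(samples, scores):
    print(f"{name:<20} PC1 = {pc1:+.2f}  PC2 = {pc2:+.2f}")
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```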

Keywords: pyrolysis, torrefaction, biooil, lignin

Procedia PDF Downloads 304
12758 A Hebbian Neural Network Model of the Stroop Effect

Authors: Vadim Kulikov

Abstract:

The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, there have been many variations of the experiment revealing various mechanisms behind semantic, attentional, behavioral and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words out loud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic interference and how much to response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? There are many models and theories in the literature tackling these questions, which will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed which is based on the philosophical idea, developed by the author, that the mind operates as a collection of different information processing modalities, such as different sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where 'framework' attempts to generalize the concepts of modality, perspective and 'point of view'. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. In the simplest case there are four: visual color processing, text reading, speech production and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. In the beginning, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to 'coherence' in framework theory) between these different modalities. The amount of data fed into the network is supposed to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After the training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has many advantages: it predicts more data, and the architecture is simpler and biologically more plausible.
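
A toy sketch of the Hebbian idea behind the model is given below: the word-to-speech connections receive far more "practice" than the color-to-speech connections, so the word pathway ends up stronger. The block sizes, learning rate and amounts of practice are arbitrary assumptions, not the author's actual architecture.

```python
# Toy Hebbian sketch: two modality-to-speech weight matrices start at zero and
# are strengthened in proportion to how often each pathway is practiced.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 8                      # units per modality block (assumed)
eta = 0.05                              # Hebbian learning rate (assumed)
w_word_to_speech = np.zeros((n_out, n_in))
w_color_to_speech = np.zeros((n_out, n_in))

def hebbian_step(w, pre, post, eta):
    """Hebbian update: strengthen weights between co-active units."""
    return w + eta * np.outer(post, pre)

patterns = [(rng.random(n_in) > 0.5).astype(float) for _ in range(4)]
for pre in patterns:
    post = pre.copy()                   # assume a fixed, correct naming response
    for _ in range(50):                 # reading words: heavily practiced
        w_word_to_speech = hebbian_step(w_word_to_speech, pre, post, eta)
    for _ in range(5):                  # naming colors: far less practiced
        w_color_to_speech = hebbian_step(w_color_to_speech, pre, post, eta)

print("mean word->speech weight :", w_word_to_speech.mean().round(2))
print("mean color->speech weight:", w_color_to_speech.mean().round(2))
```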

Keywords: connectionism, Hebbian learning, artificial neural networks, philosophy of mind, Stroop

Procedia PDF Downloads 252
12757 Studies on Design of Cyclone Separator with Tri-Chambered Filter Unit for Dust Removal in Rice Mills

Authors: T. K. Chandrashekar, R. Harish Kumar, T. B. Prasad, C. R. Rajashekhar

Abstract:

Cyclone separators have long been used for dust collection in rice mills. However, their dust collection efficiency is low and is influenced by factors like geometry, exit pipe dimensions and length, humidity, and the temperature at the place of dust generation. The design of the cyclone has been slightly altered, and the new design has proven to be successful in collecting dust particles of sizes up to 10 microns; the major modification was to change the height of the exit pipe of the cyclone chamber to obtain optimum dust collection. The cyclone is coupled with a tri-chambered filter unit with three geotextile material filters of different mesh sizes to capture the dust smaller than 10 microns.

Keywords: cyclone-separator, rice mill, tri chambered filter, dust removal

Procedia PDF Downloads 499
12756 Radiation Effects in the PVDF/Graphene Oxide Nanocomposites

Authors: Juliana V. Pereira, Adriana S. M. Batista, Jefferson P. Nascimento, Clascídia A. Furtado, Luiz O. Faria

Abstract:

Exposure to ionizing radiation has been found to induce changes in poly(vinylidene fluoride) (PVDF) homopolymers. The high dose gamma irradiation process induces the formation of C=C and C=O bonds in its [CH2-CF2]n main chain. The irradiation also provokes crosslinking and chain scission. All these radio-induced defects lead to changes in the PVDF crystalline structure. As a consequence, it is common to observe a decrease in the melting temperature (TM) and melting latent heat (LM) and some changes in its ferroelectric features. We have investigated the possibility of preparing nanocomposites of PVDF with graphene oxide (GO) through the radio-induction of molecular bonds. In this work, we discuss how the gamma radiation interacts with the nanocomposite crystalline structure.

Keywords: gamma irradiation, graphene oxide, nanocomposites, PVDF

Procedia PDF Downloads 268
12755 Teachers Engagement to Teaching: Exploring Australian Teachers’ Attribute Constructs of Resilience, Adaptability, Commitment, Self/Collective Efficacy Beliefs

Authors: Lynn Sheridan, Dennis Alonzo, Hoa Nguyen, Andy Gao, Tracy Durksen

Abstract:

Disruptions to teaching (e.g., COVID-related) have increased work demands for teachers. There is an opportunity for research to explore evidence-informed steps to support teachers. Collective evidence indicates that teachers' personal attributes in the workplace (e.g., self-efficacy beliefs) promote success in teaching and support teacher engagement. Teacher engagement plays a role in students' learning and teachers' effectiveness. Engaged teachers are better at overcoming work-related stress and burnout and are more likely to take on active roles. Teachers' commitment is influenced by a host of personal (e.g., teacher well-being) and environmental factors (e.g., job stresses). The job demands-resources model provided a conceptual basis for examining how teachers' well-being is influenced by job demands and job resources. Job demands potentially evoke strain and exceed the employee's capability to adapt. Job resources entail what the job offers to individual teachers (e.g., organisational support), helping to reduce job demands. The application of the job demands-resources model involves gathering an evidence base on personal attributes (job resources) and their connections. The study explored the association between the constructs (resilience, adaptability, commitment, self/collective efficacy) and a teacher's engagement with the job. The paper sought to elaborate on the model and determine the associations of the key constructs of well-being (resilience, adaptability), commitment, and motivation (self- and collective-efficacy beliefs) with teachers' engagement in teaching. Data collection involved an online multi-dimensional instrument using validated items, distributed from 2020 to 2022. The instrument was designed to identify construct relationships. There were 170 participants. Data analysis: the reliability coefficients, means, standard deviations, skewness, and kurtosis statistics for the six variables were computed. All scales have good reliability coefficients (.72-.96). A confirmatory factor analysis (CFA) and structural equation model (SEM) were performed to provide measurement support and to obtain latent correlations among factors. The final analysis was performed using structural equation modelling. Several fit indices were used to evaluate the model fit, including chi-square statistics and the root mean square error of approximation. The correlations among constructs indicated that positive correlations exist, with the highest found between teacher engagement and resilience (r = .80) and the lowest between teacher adaptability and collective teacher efficacy (r = .22). Given these associations, we proceeded with the CFA, which yielded adequate fit: χ²(270, 1019) = 1836.79, p < .001, RMSEA = .04, CFI = .94, TLI = .93 and SRMR = .04. All values were within the threshold values, indicating a good model fit. The results indicate that increasing teachers' self-efficacy beliefs will increase their level of engagement, and that teacher adaptability and resilience are positively associated with self-efficacy beliefs, as are collective teacher efficacy beliefs. Implications for school leaders and school systems: 1. investing in increasing teachers' sense of efficacy beliefs to manage work demands; 2. leadership approaches that enhance teachers' adaptability and resilience; and 3. a culture of collective efficacy support. Preparing teachers for now and for the future offers an important reminder to policymakers and school leaders of the importance of supporting teachers' personal attributes when faced with the challenging demands of the job.
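
One of the reported steps, the scale reliability coefficients, can be illustrated with Cronbach's alpha, computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the item responses below are simulated for 170 respondents purely as an example.

```python
# Cronbach's alpha sketch on simulated Likert-type responses.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(170, 1))                       # 170 respondents, as in the study
responses = np.clip(np.round(4 + 1.2 * latent + rng.normal(0, 0.8, size=(170, 6))), 1, 7)
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
```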

Keywords: collective teacher efficacy, teacher self-efficacy, job demands, teacher engagement

Procedia PDF Downloads 75
12754 Simulation of Forest Fire Using Wireless Sensor Network

Authors: Mohammad F. Fauzi, Nurul H. Shahba M. Shahrun, Nurul W. Hamzah, Mohd Noah A. Rahman, Afzaal H. Seyal

Abstract:

In this paper, we propose a simulation system using a Wireless Sensor Network (WSN) distributed around the forest for early forest fire detection and for locating the affected areas. In Brunei Darussalam, approximately 78% of the nation is covered by forest. Since the forest is Brunei's most precious natural asset, it is very important to protect and conserve it. The hot climate in Brunei Darussalam can lead to forest fires, which can be a fatal threat to the preservation of our forest. The process consists of getting data from the sensors, analyzing the data and producing an alert. The key factors that we analyze are the surrounding temperature, wind speed and wind direction, and the humidity of the air and soil.

Keywords: forest fire monitor, humidity, wind direction, wireless sensor network

Procedia PDF Downloads 438
12753 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques

Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu

Abstract:

Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as through API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
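
A stripped-down sketch of the pipeline described above is given below: a synthetic dataset stands in for the simulated early-CHD indicators, an ensemble classifier is trained, and accuracy/precision/recall/F1 are reported. The feature meanings, sample sizes and model choice are assumptions for illustration only.

```python
# Sketch: synthetic data standing in for simulated CHD indicators, an ensemble
# classifier, and the validation metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in for simulated heart-rate variability, blood pressure, lipid, ECG features
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    stratify=y, random_state=42)

clf = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy :", round(accuracy_score(y_test, pred), 3))
print("precision:", round(precision_score(y_test, pred), 3))
print("recall   :", round(recall_score(y_test, pred), 3))
print("F1-score :", round(f1_score(y_test, pred), 3))
```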

Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare

Procedia PDF Downloads 48
12752 Engagement Analysis Using DAiSEE Dataset

Authors: Naman Solanki, Souraj Mondal

Abstract:

With the world moving towards online communication, the amount of stored video data has exploded in the past few years. Consequently, it has become crucial to analyse participants' engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, like education, client meetings, dating, etc. Video-level or frame-level prediction of engagement for a user involves the development of robust models that can capture facial micro-emotions efficiently. For the development of an engagement prediction model, it is necessary to have a widely accepted standard dataset for engagement analysis. DAiSEE is one of the datasets which consists of in-the-wild data and has gold-standard annotations for engagement prediction. Earlier research using the DAiSEE dataset involved training and testing standard models, such as CNN-based models, but the results were not satisfactory according to industry standards. In this paper, a multi-level classification approach has been introduced to create a more robust model for engagement analysis using the DAiSEE dataset. This approach recorded testing accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom level, engagement level, confusion level, and frustration level, respectively.

Keywords: computer vision, engagement prediction, deep learning, multi-level classification

Procedia PDF Downloads 101
12751 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of the criminal law should be applied, based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the poor prediction performance was actually due to wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as "Drug Control Regulations" or "Drug Hazard Control Regulations". In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
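
The planned sentence-pair setup can be sketched with the Hugging Face transformers API as below: the crime facts are paired with candidate charge names and scored by a BERT sequence classifier. The checkpoint name, example text and label count are assumptions, and in practice the model would first be fine-tuned on the indictment corpus.

```python
# Sketch of BERT sentence-pair scoring: (crime facts, candidate charge) pairs
# fed to a sequence classifier. Checkpoint, texts and labels are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-chinese"                        # assumed Chinese BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

facts = "被告人於道路上駕駛汽車因過失撞傷行人。"          # hypothetical crime facts
candidate_charges = ["過失傷害", "過失致死"]              # hypothetical charge names

enc = tokenizer([facts] * len(candidate_charges), candidate_charges,
                padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits                        # one match/no-match score per pair
probs = logits.softmax(dim=-1)[:, 1]
for charge, p in zip(candidate_charges, probs):
    print(f"{charge}: match probability ≈ {p.item():.2f} (untrained head, for illustration)")
```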

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 108
12750 Escalation of Commitment and Turnover in Top Management Teams

Authors: Dmitriy V. Chulkov

Abstract:

Escalation of commitment is defined as the continuation of a project after receiving negative information about it. While the literature in management and psychology has identified various factors contributing to escalation behavior, this phenomenon has received little analysis in economics, potentially due to the apparent irrationality of escalation. In this study, we present an economic model of escalation with asymmetric information in a principal-agent setup where the agents are responsible for a project selection decision and discover the outcome of the project before the principal. Our theoretical model complements the existing literature on several accounts. First, we link the incentive to escalate commitment to a project with the manager's turnover decision. When a manager learns the outcome of the project and stops it, that reveals that a mistake was made. There is an incentive to continue failing projects and avoid admitting the mistake. This incentive is enhanced when the agent may voluntarily resign from the firm before the outcome of the failing project is revealed, and thus not bear the full extent of the reputation damage due to project failure. As long as some successful managers leave the firm for extraneous reasons, outside firms find it difficult to link failing projects with certainty to managers who left a firm. Second, we demonstrate that non-CEO managers have reputation concerns separate from those of the CEO, and thus may escalate commitment to projects they oversee when such escalation can attenuate damage to their reputation from impending project failure. Such an incentive for escalation will be present for non-CEO managers if the CEO delegates responsibility for a project to a non-CEO executive. If reputation matters for promotion to CEO, the incentive for a rising executive to escalate in order to protect reputation is distinct from that of a CEO. Third, our theoretical model is supported by empirical analysis of changes in the firm's operations, measured by the presence of discontinued operations at the time of turnover among the top four members of the top management team. Discontinued operations are indicative of the termination of failing projects at a firm. The empirical results demonstrate that, in a large dataset of over three thousand publicly traded U.S. firms for the period from 1993 to 2014, turnover by top executives significantly increases the likelihood that the firm discontinues operations. Furthermore, the type of turnover matters, as this effect is strongest when at least one non-CEO member of the top management team leaves the firm and when the CEO departure is due to a voluntary resignation and not to retirement or illness. The empirical results are consistent with the predictions of the theoretical model and suggest that escalation of commitment is primarily observed in decisions by non-CEO members of the top management team.

Keywords: discontinued operations, escalation of commitment, executive turnover, top management teams

Procedia PDF Downloads 354
12749 Mechanical Properties of Die-Cast Nonflammable Mg Alloy

Authors: Myoung-Gon Yoon, Jung-Ho Moon, Tae Kwon Ha

Abstract:

Tensile specimens of nonflammable AZ91D Mg alloy were fabricated in this study via a cold chamber die-casting process. The dimensions of the tensile specimens were 25 mm in length, 4 mm in width, and 0.8 or 3.0 mm in thickness. Microstructure observation was conducted before and after tensile tests at room temperature. In the die-casting process, various injection distances from 150 to 260 mm were employed to obtain optimum process conditions. The distribution of the Al12Mg17 phase was the key factor determining the mechanical properties of the die-cast Mg alloy. Specimens with 3 mm thickness showed superior mechanical properties to those with 0.8 mm thickness. A closed network of Al12Mg17 phase along grain boundaries was found to be detrimental to the mechanical properties of the die-cast Mg alloy.

Keywords: non-flammable magnesium alloy, AZ91D, die-casting, microstructure, mechanical properties

Procedia PDF Downloads 294
12748 Cantilever Secant Pile Constructed in Sand: Capping Beam Analysis and Design - Part I

Authors: Khaled R. Khater

Abstract:

The theme of this paper is soil-retaining structures. The cantilever secant-pile wall triggers scientific curiosity, especially the structural analysis of the capping beam and its interaction with the secant piles as one integrated matrix. It is believed that the straining actions of this integrated matrix are most probably induced by a combination of the induced line load and non-uniform horizontal pile-tip displacements. The strategy followed throughout this study starts by converting the pile-head horizontal displacements generated by the Plaxis-2D model into a system of concentrated line loads acting per meter run along the capping beam. Those line loads are then the input data of the Staad-Pro 3D model. The models are tailored to allow the capping beam and the secant piles to interact as one matrix, i.e., a unit. It is believed that the suggested strategy presents a close-to-real structural simulation. The above is the paper's premise and methodology. Three sand densities, one pile rigidity and one excavation depth, h = 4.0 m, are sufficient to achieve the paper's objective.

Keywords: secant piles, capping beam, analysis, design, plaxis 2D, staad pro 3D

Procedia PDF Downloads 85