Search results for: maximum likelihood amplitude estimation (MLQAE)
674 The Effect of Mathematical Modeling of Damping on the Seismic Energy Demands
Authors: Selamawit Dires, Solomon Tesfamariam, Thomas Tannert
Abstract:
Modern earthquake engineering and design encompass performance-based design philosophy. The main objective in performance-based design is to achieve a system performing precisely to meet the design objectives so as to reduce unintended seismic risks and associated losses. Energy-based earthquake-resistant design is one of the design methodologies that can be implemented in performance-based earthquake engineering. In energy-based design, the seismic demand is usually described as the ratio of the hysteretic to input energy. Once the hysteretic energy is known as a percentage of the input energy, it is distributed among energy-dissipating components of a structure. The hysteretic to input energy ratio is highly dependent on the inherent damping of a structural system. In numerical analysis, damping can be modeled as stiffness-proportional, mass-proportional, or a linear combination of stiffness and mass. In this study, the effect of mathematical modeling of damping on the estimation of seismic energy demands is investigated by considering elastic-perfectly-plastic single-degree-of-freedom systems representing short to long period structures. Furthermore, the seismicity of Vancouver, Canada, is used in the nonlinear time history analysis. According to the preliminary results, the input energy demand is not sensitive to the type of damping model deployed. Hence, consistent results are achieved regardless of the damping models utilized in the numerical analyses. On the other hand, the hysteretic to input energy ratios vary significantly for the different damping models.
Keywords: damping, energy-based seismic design, hysteretic energy, input energy
Procedia PDF Downloads 168
673 Estimated Human Absorbed Dose of 111In-BPAMD as a New Bone-Seeking SPECT-Imaging Agent
Authors: H. Yousefnia, S. Zolghadri
Abstract:
An early diagnosis of bone metastases is very important for providing a profound decision on a subsequent therapy. A prerequisite for the clinical application of a new diagnostic radiopharmaceutical is the measurement of the organ radiation exposure dose from biodistribution data in animals. In this study, the human absorbed dose of a novel agent for SPECT imaging of bone metastases, the 111In-(4-{[(bis(phosphonomethyl))carbamoyl]methyl}-7,10-bis(carboxymethyl)-1,4,7,10-tetraazacyclododec-1-yl) acetic acid (111In-BPAMD) complex, has been estimated for human organs based on mice data. The radiolabeled complex was prepared with high radiochemical purity at the optimal conditions. Biodistribution of the complex was investigated in male Syrian mice at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose estimation of the complex was performed from the mice data by the radiation absorbed dose assessment resource (RADAR) method. The 111In-BPAMD complex was prepared with a radiochemical purity greater than 95% (ITLC) and a specific activity of 2.85 TBq/mmol. The total body effective absorbed dose for 111In-BPAMD was 0.205 mSv/MBq. This value is comparable to those of other 111In complexes in clinical use. The results show that the dose to critical organs from the complex is well within the range considered acceptable for diagnostic nuclear medicine procedures. Generally, 111In-BPAMD has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastases in the near future.
Keywords: In-111, BPAMD, absorbed dose, RADAR
Procedia PDF Downloads 482
672 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules
Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang
Abstract:
Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g. current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve for a photovoltaic module varies when temperature or solar radiation is different. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on the p-n junction semiconductor theory, a new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of temperature and irradiation intensity on the MPP characteristics of HCPV modules. These results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operation conditions. A 0.1% error is found between the estimated and the real maximum power point.
Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor
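The MPP determination the abstract describes amounts to maximizing P = V·I over a measured I-V characteristic. A minimal sketch of that search, using hypothetical sample values rather than measured HCPV data:

```python
def find_mpp(voltages, currents):
    """Return (V, I, P) at the maximum power point of a sampled I-V curve."""
    powers = [v * i for v, i in zip(voltages, currents)]
    k = max(range(len(powers)), key=powers.__getitem__)
    return voltages[k], currents[k], powers[k]

# Illustrative I-V samples for a PV module (not measured HCPV data)
V = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
I = [8.0, 7.9, 7.8, 7.5, 6.8, 4.5, 0.0]

v_mpp, i_mpp, p_mpp = find_mpp(V, I)
```

In practice the paper's direct-estimation method would replace the sampled sweep with a model-based evaluation, but the power maximum itself is defined exactly as above.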
Procedia PDF Downloads 584
671 Work Related Musculoskeletal Disorder: A Case Study of Office Computer Users in Nigerian Content Development and Monitoring Board, Yenagoa, Bayelsa State, Nigeria
Authors: Tamadu Perry Egedegu
Abstract:
Rapid growth in the use of electronic data has affected both employees and the workplace. Experience shows that jobs with multiple risk factors have a greater likelihood of causing work-related musculoskeletal disorders (WRMSDs), depending on the duration, frequency and/or magnitude of exposure to each; it is therefore important that ergonomic risk factors be considered in light of their combined effect in causing or contributing to WRMSDs. Awkward posture and long hours in front of visual display terminals can result in WRMSDs. This study investigated musculoskeletal disorders among office workers and contributes to awareness of the causes and consequences of WRMSDs arising from a lack of ergonomics training. The study was conducted using an observational cross-sectional design. A sample of 109 respondents was drawn from the target population through a purposive sampling method. Data came from both primary and secondary sources: primary data were collected through questionnaires, and secondary data were sourced from journals, textbooks, and internet materials. Questionnaires, designed in a YES or NO format according to the study objectives, were the main instrument for data collection. Content validity approval was used to ensure that the variables were adequately covered, and the reliability of the instrument was established through the test-retest method, yielding a reliability index of 0.84. The data collected from the field were analyzed with descriptive statistics (charts, percentages and means). The study found that the most affected body regions were the upper back, followed by the lower back, neck, wrist, shoulder and eyes, while the least affected body parts were the knee, calf and ankle.
Furthermore, the prevalence of work-related musculoskeletal malfunction was linked with long working hours (6-8 hrs per day), lack of back support on seats, glare on the monitor, inadequate regular breaks, and repetitive motion of the upper limbs and wrist when using the computer. Finally, based on these findings, recommendations were made to reduce the prevalence of WRMSDs among office workers.
Keywords: work related musculoskeletal disorder, Nigeria, office computer users, ergonomic risk factor
Procedia PDF Downloads 241
670 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, to maintain the amplitude, phase and frequency parameters, and to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB.
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters, such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found to be satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
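The variable step size P&O scheme described above can be sketched as a single update rule: the step is chosen from the zone of the dPpv/dVpv curve, and the perturbation direction follows the classic P&O sign test. The threshold and step values below are hypothetical, not taken from the paper:

```python
def perturb_and_observe(v, p, v_prev, p_prev, dppv_dvpv,
                        step_small=0.1, step_large=0.5, zone0_limit=1.0):
    """One update of a variable-step P&O MPPT.

    Zone 0 (|dP/dV| below zone0_limit, i.e. near the MPP) uses the fine step;
    zones 1 and 2 (far from the MPP) use the large step for fast tracking.
    """
    step = step_small if abs(dppv_dvpv) < zone0_limit else step_large
    dP, dV = p - p_prev, v - v_prev
    # Classic P&O rule: keep perturbing in the same direction if power rose
    if dP * dV > 0:
        return v + step
    return v - step
```

For example, a small positive perturbation that raised the power keeps the voltage moving upward by the fine step when the operating point is near the MPP.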
Procedia PDF Downloads 594
669 An Automatic Bayesian Classification System for File Format Selection
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach to the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends to an expert the file format for his institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
Keywords: data mining, digital libraries, digital preservation, file format
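A naive Bayes classifier over the terms of a format vocabulary, as the abstract outlines, can be sketched in a few lines. The format labels and training descriptions below are made up for illustration; they are not the paper's knowledge base:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (format_label, description). Returns priors and per-label word counts."""
    priors, words = Counter(), {}
    for label, text in docs:
        priors[label] += 1
        words.setdefault(label, Counter()).update(text.lower().split())
    return priors, words

def classify(text, priors, words, vocab_size=1000):
    """Pick the label maximizing log P(label) + sum of log P(word | label)."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        n = sum(words[label].values())
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the product
            lp += math.log((words[label][w] + 1) / (n + vocab_size))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical format descriptions (not from the paper's knowledge base)
docs = [("PDF", "portable document fixed layout print"),
        ("TIFF", "raster image lossless archival scan"),
        ("PDF", "document pages embedded fonts print"),
        ("TIFF", "image bitonal scan compression")]
priors, words = train_nb(docs)
```

A query description is then scored against each format, e.g. `classify("archival image scan", priors, words)`.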
Procedia PDF Downloads 499
668 Intercultural Trainings for Future Global Managers: Evaluating the Effect on the Global Mind-Set
Authors: Nina Dziatzko, Christopher Stehr, Franziska Struve
Abstract:
Intercultural competence as an explicitly required skill almost never appears in job advertisements in international or even global contexts. But especially those who have to deal with different nationalities and cultures in their everyday business need several intercultural competencies and, further, a global mind-set. The question thus arises of how potential future global managers can be trained to learn these competencies. In this regard, it might be helpful to see whether different types of intercultural trainings have different effects on those skills. This paper outlines lessons learned based on the evaluation of two different intercultural trainings for management students. The main differences between the observed intercultural trainings are the amount of theoretical input relative to hands-on experiences, the number of trainers, and the methods used to teach implicit cultural rules. Both groups contain management students with the willingness and perspective to work abroad or in an international context. The research is carried out with a pre-training survey and a post-training survey consisting of questions on the international context of the students and a self-estimation of 19 identified intercultural and global mind-set skills, such as cosmopolitanism, empathy, differentiation and adaptability. Whereas there is no clear result as to which training yields an overall significantly higher increase in skills, there is a clear difference in the focus of the competencies trained by each of the intercultural trainings. This research thereby provides a guideline for both academic institutions and companies in deciding between different types of intercultural trainings, provided the required skills to be trained are defined. The education of future global managers can thus be made more efficient and more accurately fitted.
Keywords: global mind-set, intercultural competencies, intercultural training, learning experiences
Procedia PDF Downloads 277
667 AI Applications in Accounting: Transforming Finance with Technology
Authors: Alireza Karimi
Abstract:
Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field.
Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records.
Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them.
Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning.
Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity.
Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies.
Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.
Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance
Procedia PDF Downloads 63
666 Estimation of Elastic Modulus of Soil Surrounding Buried Pipeline Using Multi-Response Surface Methodology
Authors: Won Mog Choi, Seong Kyeong Hong, Seok Young Jeong
Abstract:
The stress on a buried pipeline under pavement is significantly affected by vehicle loads and by the elastic modulus of the soil surrounding the pipeline. The correct elastic modulus of the soil has to be applied to the finite element model to investigate the effect of vehicle loads on the buried pipeline using finite element analysis. The purpose of this study is to establish an approach to calculating the correct elastic modulus of the soil using an optimization process. The optimal elastic modulus of the soil, which minimizes the difference between the strain measured in a vehicle driving test at a velocity of 35 km/h and the strain calculated from finite element analyses, was calculated through an optimization process using multi-response surface methodology. Three elastic moduli of soil (road layer, original soil, dense sand) surrounding the pipeline were defined as the variables for the optimization. Further analyses with the optimal elastic modulus at velocities of 4.27 km/h, 15.47 km/h and 24.18 km/h were performed and compared to the test results to verify the applicability of multi-response surface methodology. The results indicated that the strain of the buried pipeline was mostly affected by the elastic modulus of the original soil, followed by the dense sand and the road layer, and the results of the further analyses with the optimal elastic modulus of soil show good agreement with the test.
Keywords: pipeline, optimization, elastic modulus of soil, response surface methodology
Procedia PDF Downloads 386
665 Techno-Economic Analysis of the Production of Aniline
Authors: Dharshini M., Hema N. S.
Abstract:
The project for the production of aniline is designed around a feed of 295.46 tons per day of nitrobenzene. The material and energy balance calculations for the different equipment, such as the distillation column, heat exchangers, reactor and mixer, are carried out with simulation via DWSIM. The conversion of nitrobenzene to aniline by the hydrogenation process is taken to be 96%, and the total production of the plant was found to be 215 TPD. The cost estimation of the process is carried out to assess the feasibility of the plant. The net profit and percentage return on investment are estimated to be ₹27 crores and 24.6%, respectively. The payback period was estimated to be 4.05 years and the unit production cost is ₹113/kg. A techno-economic analysis was performed for the production of aniline; the results include an economic analysis and a sensitivity analysis of critical factors. From the economic analysis, a larger plant scale increases the total capital investment and annual operating cost, even though the unit production cost decreases. Uncertainty analysis was performed to predict the influence of economic factors on profitability, and scenario analysis is one way to quantify uncertainty. In the scenario analysis, the best-case and worst-case scenarios are compared with the base-case scenario. The best-case scenario was found at a feed rate of 120 kmol/hr with a unit production cost of ₹112.05/kg, and the worst-case scenario at a feed rate of 60 kmol/hr with a unit production cost of ₹115.9/kg. The base case is closely related to the best case, agreeing to within 99.2% in terms of unit production cost. Since the unit production cost is low and profitability is high with a short payback time, it is feasible to construct a plant at this capacity.
Keywords: aniline, nitrobenzene, economic analysis, unit production cost
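The profitability figures quoted above follow from simple undiscounted relations: payback period is capital over annual profit, ROI is their inverse expressed as a percentage, and unit cost is operating cost per kilogram produced. A sketch with illustrative inputs (the capital of ₹110 crores is back-computed from the quoted 24.6% ROI and is an assumption, not a figure from the paper):

```python
def techno_economics(capital, annual_profit, annual_production_kg, annual_operating_cost):
    """Simple undiscounted profitability metrics for a process plant.

    capital and annual_profit share one currency unit (e.g. Rs crores);
    annual_operating_cost shares a unit with the per-kg cost (e.g. Rs).
    """
    payback_years = capital / annual_profit
    roi_pct = 100.0 * annual_profit / capital
    unit_cost = annual_operating_cost / annual_production_kg
    return payback_years, roi_pct, unit_cost

# Illustrative values loosely matching the abstract's figures
payback, roi, unit_cost = techno_economics(
    capital=110.0,                  # Rs crores (assumed from 24.6% ROI)
    annual_profit=27.0,             # Rs crores, as quoted
    annual_production_kg=1000.0,    # hypothetical basis quantity
    annual_operating_cost=113000.0, # Rs, chosen to give Rs 113/kg
)
```

With these inputs the payback works out to about 4.07 years, consistent with the abstract's 4.05 years given the rounded ROI.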
Procedia PDF Downloads 109
664 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to its concentrations. PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI satellite image date. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The obtained reflectance values for the visible bands (coastal aerosol, blue and green bands) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was chosen on the basis of the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
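The model-selection criterion described above reduces, for a single band, to an ordinary least squares fit of PM10 against reflectance, scored by R and RMSE. A minimal single-band sketch with hypothetical reflectance and PM10 values (not the Kirkuk measurements):

```python
import math

def fit_linear(x, y):
    """Ordinary least squares y = a*x + b for one reflectance band."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def r_and_rmse(x, y, a, b):
    """Correlation coefficient (as sqrt of R^2) and RMSE of the fit."""
    pred = [a * xi + b for xi in x]
    my = sum(y) / len(y)
    ss_res = sum((p - yi) ** 2 for p, yi in zip(pred, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r = math.sqrt(max(0.0, 1.0 - ss_res / ss_tot))
    rmse = math.sqrt(ss_res / len(y))
    return r, rmse

# Hypothetical band reflectances and co-located PM10 (ug/m3) readings
refl = [0.10, 0.12, 0.15, 0.18, 0.22]
pm10 = [40.0, 48.0, 60.0, 72.0, 88.0]
a, b = fit_linear(refl, pm10)
r, rmse = r_and_rmse(refl, pm10, a, b)
```

A multispectral model would extend the same idea to several bands; the candidate with the highest R and lowest RMSE against the ground data would be retained.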
Procedia PDF Downloads 442
663 Transmission of Values among Polish Young Adults and Their Parents: Pseudo Dyad Analysis and Gender Differences
Authors: Karolina Pietras, Joanna Fryt, Aleksandra Gronostaj, Tomasz Smolen
Abstract:
Young women and men differ from their parents in preferred values. Those differences enable their adaptability to a new socio-cultural context and help with fulfilling developmental tasks specific to young adulthood. At the same time core values, with special importance to family members, are transmitted within families. Intergenerational similarities in values may thus be both an effect of value transmission within a family and a consequence of sharing the same socio-cultural context. These processes are difficult to separate. In our study we assessed similarities and differences in values within four intergenerational family dyads (mothers-daughters, fathers-daughters, mothers-sons, fathers-sons). Sixty Polish young adults (30 women and 30 men aged 19-25) along with their parents (a total of 180 participants) completed the Schwartz' Portrait Value Questionnaire (PVQ-21). To determine which values may be transmitted within families, we used a correlation analysis and pseudo dyad analysis that allows for the estimation of a baseline likeness between all tested subjects and consequently makes it possible to determine if similarities between actual family members are greater than chance. We also assessed whether different strategies of measuring similarity between family members render different results, and checked whether resemblances in family dyads are influenced by child's and parent's gender. Reported similarities were interpreted in light of the evolutionary and the value salience perspective.
Keywords: intergenerational differences in values, gender differences, pseudo dyad analysis, transmission of values
Procedia PDF Downloads 502
662 Call-Back Laterality and Bilaterality: Possible Screening Mammography Quality Metrics
Authors: Samson Munn, Virginia H. Kim, Huija Chen, Sean Maldonado, Michelle Kim, Paul Koscheski, Babak N. Kalantari, Gregory Eckel, Albert Lee
Abstract:
In terms of screening mammography quality, neither the portion of reports that advise call-back imaging that should be bilateral versus unilateral nor how much the unilateral call-backs may appropriately diverge from 50–50 (left versus right) is known. Many factors may affect detection laterality: display arrangement, reflections preferentially striking one display location, hanging protocols, seating positions with respect to others and displays, visual field cuts, health, etc. The call-back bilateral fraction may reflect radiologist experience (not in our data) or confidence level. Thus, laterality and bilaterality of call-backs advised in screening mammography reports could be worthy quality metrics. Here, laterality data did not reveal a concern until drilling down to individuals. Bilateral screening mammogram report recommendations by five breast imaging, attending radiologists at Harbor-UCLA Medical Center (Torrance, California) 9/1/15--8/31/16 and 9/1/16--8/31/17 were retrospectively reviewed. Recommended call-backs for bilateral versus unilateral, and for left versus right, findings were counted. Chi-square (χ²) statistic was applied. Year 1: of 2,665 bilateral screening mammograms, reports of 556 (20.9%) recommended call-back, of which 99 (17.8% of the 556) were for bilateral findings. Of the 457 unilateral recommendations, 222 (48.6%) regarded the left breast. Year 2: of 2,106 bilateral screening mammograms, reports of 439 (20.8%) recommended call-back, of which 65 (14.8% of the 439) were for bilateral findings. Of the 374 unilateral recommendations, 182 (48.7%) regarded the left breast. Individual ranges of call-backs that were bilateral were 13.2–23.3%, 10.2–22.5%, and 13.6–17.9%, by year(s) 1, 2, and 1+2, respectively; these ranges were unrelated to experience level; the two-year mean was 15.8% (SD=1.9%). The lowest χ² p value of the group's sidedness disparities years 1, 2, and 1+2 was > 0.4. 
Regarding four of the individual radiologists, the lowest p value was 0.42. The fifth radiologist, however, disfavored the left, with p values of 0.21, 0.19, and 0.07 for years 1, 2, and 1+2, respectively; that radiologist had the greatest number of years of experience. There was thus a concerning 93% likelihood that the bias against left-breast findings evidenced by one of our radiologists was not random. Notably, very soon after the period under review, he retired, presented with leukemia, and died. We call for research to be done, particularly by large departments with many radiologists, on two possible new quality metrics in screening mammography: laterality and bilaterality. (Images, patient outcomes, report validity, and radiologist psychological confidence levels were not assessed. No intervention nor subsequent data collection was conducted. This uncomplicated collection of data and simple appraisal were not designed, nor had there been any intention, to develop or contribute to generalizable knowledge (per U.S. DHHS 45 CFR, part 46).)
Keywords: mammography, screening mammography, quality, quality metrics, laterality
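The laterality test applied here is a chi-square goodness-of-fit against a 50-50 left/right split with one degree of freedom, for which the p value has the closed form p = erfc(sqrt(chi2/2)). A sketch using the abstract's year-1 counts (222 left of 457 unilateral call-backs):

```python
import math

def laterality_chi2(left, right):
    """Chi-square goodness-of-fit of left/right call-back counts
    against an even 50-50 split (1 degree of freedom)."""
    expected = (left + right) / 2.0
    chi2 = ((left - expected) ** 2 + (right - expected) ** 2) / expected
    # For 1 d.o.f.: p = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Year 1 from the abstract: 222 of 457 unilateral recommendations were left
chi2, p = laterality_chi2(222, 235)
```

This reproduces the reported group-level result: the deviation from 50-50 is small and p comfortably exceeds 0.4.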
Procedia PDF Downloads 162
661 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length
Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar
Abstract:
An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an EAL (Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT units (Kodak 9000 3D). Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between CBCT EPWL and control length but the difference was too small to be clinically significant. From the Bland Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential of error. In measuring file lengths, RVG had a bigger window of 95% limits of agreement compared to CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT showed that it is acceptable for use in the estimation of preoperative working length. (2) Small FOV CBCT may be used in working length determination but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
Keywords: accuracy, CBCT, endodontics, measurement
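The Bland-Altman comparison used above boils down to the mean difference between the two methods and 95% limits of agreement at mean ± 1.96 SD of the differences. A minimal sketch with hypothetical working lengths, not the study's measurements:

```python
import math

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement between
    two paired measurement methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, (mean_d - 1.96 * sd, mean_d + 1.96 * sd)

# Hypothetical working lengths in mm: CBCT vs control (apex shaving)
cbct    = [20.1, 19.8, 21.0, 18.9, 20.5]
control = [20.3, 19.9, 20.8, 19.2, 20.6]
bias, (lo, hi) = bland_altman_limits(cbct, control)
```

A wider (lo, hi) interval corresponds to the "bigger window of 95% limits of agreement" the abstract reports, i.e. greater potential error of a method.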
Procedia PDF Downloads 308
660 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran
Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh
Abstract:
Underpass tunnels are strongly influenced by the surrounding soil. There are complexities in the specification of real soil behavior, owing to the many uncertainties in soil properties and, additionally, to inappropriate soil constitutive models. Such factors may cause settlements in numerical analysis that are incompatible with the values obtained in actual construction. This paper reports a case study of a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program. The soil material behavior was modeled with the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters were accurately revised by a numerical back-analysis using laboratory and field test data and based on the obtained monitoring data. The result confirms that the soil parameters are typically conservatively under-estimated, and that the constitutive models cannot be applied properly for all soil conditions.
Keywords: NATM tunnel, initial lining, laboratory test data, numerical back-analysis
Procedia PDF Downloads 361
659 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads
Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed
Abstract:
Underground structures are crucial components that require detailed analysis and design. Tunnels, for instance, are massively constructed as transportation infrastructure and utility networks, especially in urban environments. Considering their prime importance to the economy and to public safety, which cannot be compromised, any instability in these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios. However, only a very limited number of studies have been carried out toward understanding the dynamic response and performance of underground tunnels under such unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical non-linear time history numerical analyses using the sophisticated finite element software Midas GTS NX are presented alongside current methods of analysis, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions and other associated uncertainties on the tunnel integrity, which may ultimately lead to catastrophic failure of the structures. The proposed fragility curves for both extreme loadings are discussed and compared, providing significant information on the performance of the tunnel under extreme hazards that may be beneficial for future risk assessment and loss estimation.
Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads
Procedia PDF Downloads 343658 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria
Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola
Abstract:
Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT), interfaced with a Geographical Information System (GIS), was applied as a tool to predict the water balance and water yield of a catchment area in Nigeria. The catchment area, which is 12,992 km², is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. In this study, data on the observed flow were collected and compared with simulated flow from SWAT. The agreement between the two data sets was evaluated using statistical measures such as the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R²). The model output shows good agreement between the observed and simulated flow, as indicated by NSE and R² values greater than 0.7 for both the calibration and validation periods. A total of 42,733 mm of water was predicted by the calibrated model as the water yield potential of the basin for the simulation period 1985 to 2010. This performance suggests that SWAT could be a promising tool to predict water balance and water yield for the sustainable management of water resources. In addition, SWAT could be applied to other basins in Nigeria as a decision support tool for sustainable water management.Keywords: GIS, modeling, sensitivity analysis, SWAT, water yield, watershed level
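The two goodness-of-fit measures used above have standard definitions: NSE compares model error against the variance of the observations, and R² is the squared correlation between observed and simulated series. A small self-contained sketch with made-up flow values (not the study's data):

```python
def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the
    model is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def r_squared(obs, sim):
    """Squared Pearson correlation between observed and simulated flow."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

# Hypothetical monthly flows (m³/s): observed vs SWAT-simulated
obs = [12.0, 15.0, 20.0, 18.0, 25.0, 30.0, 22.0]
sim = [11.0, 16.0, 19.0, 17.0, 26.0, 28.0, 23.0]
print(f"NSE = {nse(obs, sim):.3f}, R² = {r_squared(obs, sim):.3f}")
```

Values above 0.7 for both measures, as reported in the abstract, are conventionally read as a satisfactory hydrological calibration.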
Procedia PDF Downloads 439657 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique
Authors: Mohammad A. Khasawneh
Abstract:
Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. This test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure. The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need for conventional friction and texture measuring devices, in an attempt to shorten and simplify the polishing procedure in the lab. Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.Keywords: friction, image analysis, polishing, statistical analysis, texture
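At its simplest, the exposed-aggregate measurement the paper relies on amounts to thresholding a grayscale image and counting bright (aggregate) pixels against the darker binder. A minimal sketch with a hypothetical 4×4 patch and an assumed threshold (the paper's actual segmentation procedure is not described in the abstract):

```python
def exposed_aggregate_fraction(gray, threshold=128):
    """Fraction of pixels brighter than threshold: a crude proxy for
    the exposed aggregate surface area in a pavement image."""
    bright = sum(1 for row in gray for px in row if px > threshold)
    total = sum(len(row) for row in gray)
    return bright / total

# Hypothetical 4x4 grayscale patch (low = dark binder, high = bright aggregate)
patch = [[30, 200, 210, 40],
         [25, 190, 220, 35],
         [20, 180, 230, 45],
         [15, 170, 240, 50]]
print(exposed_aggregate_fraction(patch))  # 8 of 16 pixels -> 0.5
```

In a real workflow this fraction, tracked over polishing intervals, would be the quantity correlated against BPN, PV and MTD.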
Procedia PDF Downloads 306656 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus
Authors: Aisling Davies, Ciara Greene
Abstract:
Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may also be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that would increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task would induce the most mind-wandering episodes, and to determine whether mind-wandering related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across each of the four versions of the task (Standard, Medium Slow, Slow, and Very Slow), and participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. 
Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial duration rate and the proportion of mind-wandering related errors increased, until the Very Slow condition where the extra decrease in duration no longer had an effect. Distinct reaction time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low) were also observed indicating four separate attention states occurring within the SART. This study establishes the optimal duration of trial presentation for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART, was that while more errors were observed in this version of the task, most of these errors were during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)
Procedia PDF Downloads 182655 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over the huge inventory of drugs in the pharmacy department of any hospital, medicines are generally categorized first on the basis of their cost, using ABC (Always Better Control) analysis, and then on the basis of their criticality, using VED (Vital, Essential, Desirable) analysis, for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money that was allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services, inventory can be managed more effectively under limited resources while providing better health care services. Secondary data were collected from a hospital to give empirical evidence.Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ
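One common formulation of the constrained multi-item EOQ problem the abstract describes is to shrink the unconstrained EOQ quantities with a Lagrange multiplier until the average inventory investment fits the budget. A hedged sketch under that assumption, with hypothetical drug data (the paper's model, cost figures and constraints are not given in the abstract):

```python
import math

def eoq(d, k, h):
    """Classic unconstrained EOQ: demand d, order cost k, holding cost h."""
    return math.sqrt(2.0 * d * k / h)

def constrained_eoq(items, budget):
    """items: list of (demand, order_cost, holding_cost, unit_price).
    Finds the Lagrange multiplier by bisection so that the average
    inventory investment sum(c*Q/2) meets the budget."""
    def q(lam):
        return [math.sqrt(2 * d * k / (h + 2 * lam * c)) for d, k, h, c in items]
    def invest(qs):
        return sum(c * qi / 2 for (d, k, h, c), qi in zip(items, qs))
    qs = q(0.0)
    if invest(qs) <= budget:          # budget not binding
        return qs
    lo, hi = 0.0, 1.0
    while invest(q(hi)) > budget:     # grow hi until feasible
        hi *= 2
    for _ in range(60):               # bisection on the multiplier
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if invest(q(mid)) > budget else (lo, mid)
    return q(hi)

# Hypothetical drugs: (annual demand, order cost, holding cost, unit price)
drugs = [(1200, 50, 2.0, 10.0), (800, 50, 1.5, 25.0), (300, 50, 1.0, 60.0)]
qs = constrained_eoq(drugs, budget=2500.0)
print([round(x, 1) for x in qs])
```

Returning the feasible endpoint of the bisection guarantees the budget constraint is respected even if the loop stops short of full convergence.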
Procedia PDF Downloads 255654 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially the high demands of customers. Since mobile network operators are a source of big data, traditional techniques are no longer effective in the new era of big data, the Internet of Things (IoT) and 5G; as a result, handling diverse big datasets effectively becomes a vital task for operators, given the continuous growth of data and the move from long term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demand, traffic, and network performance in order to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and a recurrent neural network (RNN) are employed in a data-driven application for mobile network operators. The main framework of these models includes identification of the parameters of each model, estimation, prediction, and finally a data-driven application of this prediction to business and network performance problems. The models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. Evaluation of these models using specific well-known criteria shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).Keywords: big data analytics, machine learning, CDRs, 5G
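The ARIMA family used in the paper builds on autoregression; as a stand-in that needs no external libraries, an AR(1) model fitted by least squares illustrates the identification/estimation/prediction loop on a hypothetical CDR-derived traffic series (the paper's actual model orders and data are not reproduced here):

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t] ~ a + b * x[t-1].
    A minimal stand-in for the ARIMA-style models in the paper."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical hourly call-volume series aggregated from CDRs
traffic = [100, 104, 108, 111, 115, 118, 122, 125]
a, b = fit_ar1(traffic)
forecast = a + b * traffic[-1]        # one-step-ahead prediction
print(round(forecast, 1))
```

A full ARIMA workflow would add differencing and moving-average terms and select orders from autocorrelation diagnostics; the one-step forecast above is the "prediction" stage in its simplest form.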
Procedia PDF Downloads 139653 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis
Authors: Catalina Sau Man Ng
Abstract:
Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone or negatively affecting someone's work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using a latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within that latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to demonstrate higher construct and predictive validity. In the present study, a representative sample of 2814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study are also discussed in this working paper.Keywords: latent class analysis, prevalence, survey, workplace bullying
Procedia PDF Downloads 330652 Nude Cosmetic Water-Rich Compositions for Skin Care and Consumer Emotions
Authors: Emmanuelle Merat, Arnaud Aubert, Sophie Cambos, Francis Vial, Patrick Beau
Abstract:
Consumers are sensitive to many stimuli when applying a cream: brand, packaging and, indeed, formulation composition. Many studies have demonstrated the influence of stimuli such as brand, packaging, formula color and odor (e.g., in make-up applications); those parameters influence the perceived quality of the product. The objective of this work is to further investigate the relationship between nude skincare basic compositions with different textures and consumer experience, with a tentative final step of connecting consumer feelings with key ingredients in the compositions. A new approach was developed to better understand touch-related subjective experience in consumers, based on a combination of methods: sensory analysis with ten experts, preference mapping on one hundred female consumers, and emotional assessments on thirty consumers (verbal and non-verbal, through prosody and gesture monitoring). Finally, a methodology based on a 'sensorial trip' (after olfactory, haptic and musical stimuli) was tested on the most interesting textures with 10 consumers. The results showed a greater or lesser impact depending on the compositions and also on key ingredients. Three types of formulation particularly attracted consumers: an aqueous gel, an oil-in-water emulsion, and a patented gel-in-oil formulation type. Regarding these three formulas, the preferences were revealed through both sensory and emotion tests. One was recognized as the most innovative in the consumer sensory test, whereas the two other formulas were discriminated in the emotion evaluation. The positive emotions were highlighted especially in the prosody criteria. The non-verbal analysis, which corresponds to the physical parameters of the voice, showed high pitch and amplitude values, linked to positive emotions. The verbatim, i.e. the verbal content of responses (ideas, concepts, mental images), confirmed the first conclusion. 
On the formulas selected for their generation of positive emotions, the 'sensorial trip' provided complementary information to characterize each emotional profile. In the second step, dedicated to better understanding the influence of ingredients, two types of ingredients demonstrated a clear effect on consumer preference: rheology modifiers and emollients. In conclusion, nude cosmetic compositions with well-chosen textures and ingredients can positively stimulate consumer emotions, contributing to capturing their preference. For a complete achievement of the study, a global approach (Asia, America territories...) should be developed.Keywords: sensory, emotion, cosmetic formulations, ingredients' influence
Procedia PDF Downloads 179651 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, not much historical data is available for pediatrics, and such data is required to validate assumptions when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study, together with available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps to achieve the study objective smoothly even in the presence of many constraints. This research paper explains how a hybrid study design can be planned along with an integrated technique, SEV (Simulation, Estimation, Validation), to plan the pediatric study: in brief, the SEV technique simulates the planned study data, obtains the desired estimates using borrowed adult data and Bayesian methods, and uses them to validate the assumptions. This method of validation can improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, and allows us to make informed decisions well ahead of study initiation. Based on the collected data, this technique offers insight into best practices when using data from a historical study and simulated data alike.Keywords: adaptive design, simulation, borrowing data, bayesian model
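The simplest Bayesian borrowing mechanism of the kind the SEV technique relies on is a conjugate normal-normal update: a prior centered on the adult estimate is combined with the small pediatric sample. A sketch with hypothetical numbers (the paper's actual prior, endpoints and discounting scheme are not given):

```python
def posterior_normal(prior_mean, prior_var, data_mean, data_var, n):
    """Conjugate normal-normal update: precision-weighted average of
    an adult-derived prior and the pediatric sample mean.
    data_var is the per-observation sampling variance."""
    prec = 1.0 / prior_var + n / data_var
    mean = (prior_mean / prior_var + n * data_mean / data_var) / prec
    return mean, 1.0 / prec

# Hypothetical: adult trials suggest a treatment effect near 2.0 (prior),
# while 12 pediatric subjects show a mean effect of 1.4 (variance 4.0).
m, v = posterior_normal(prior_mean=2.0, prior_var=0.5,
                        data_mean=1.4, data_var=4.0, n=12)
print(round(m, 3), round(v, 3))
```

The posterior mean lands between the adult prior and the pediatric data, and the posterior variance is smaller than either source alone: this is the quantitative payoff of borrowing.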
Procedia PDF Downloads 77650 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java
Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi
Abstract:
East Java Province ranks first among Indonesian provinces in its number of counties and cities and has the largest population. In 2015, the population reached 38,847,561, a figure that reflects very high population growth. High population growth is feared to increase the level of unemployment. In this study, the researchers mapped and modeled the unemployment rate with six variables presumed to influence it. Modeling was done by the nonparametric geographically weighted regression method with a truncated spline approach. This method was chosen because the spline is flexible; such models tend to find their own estimate of the underlying pattern. The modeling involves knot points, which mark changes in the behavior of the data. The optimum knot points were selected by choosing the minimum value of the Generalized Cross Validation (GCV) criterion. Based on the research, six variables were found to affect the level of unemployment in East Java: the percentage of the population educated above high school, the rate of economic growth, the population density, the ratio of investment to the total labor force, the regional minimum wage, and the ratio of the number of large- and medium-scale industries to the work force. The nonparametric geographically weighted regression model with the truncated spline approach had a coefficient of determination of 98.95% and an MSE of 0.0047.Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployed rate
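The knot-selection step can be illustrated without the geographic weighting: fit a degree-1 truncated power spline at each candidate knot and keep the knot with the smallest GCV. This sketch uses the OLS identity tr(H) = p for the hat matrix and synthetic piecewise-linear data with a bend at x = 5 (the study's basis degree, weights and data are not reproduced):

```python
def solve(A, b):
    """Tiny Gaussian elimination (partial pivoting) for normal equations."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_truncated_spline(xs, ys, knot):
    """Degree-1 truncated spline basis [1, x, (x - knot)_+], OLS fit,
    and the GCV score used to rank candidate knots."""
    B = [[1.0, x, max(0.0, x - knot)] for x in xs]
    p, n = len(B[0]), len(xs)
    A = [[sum(r[i] * r[j] for r in B) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * y for r, y in zip(B, ys)) for i in range(p)]
    beta = solve(A, b)
    rss = sum((y - sum(bi * ci for bi, ci in zip(row, beta))) ** 2
              for row, y in zip(B, ys))
    gcv = (rss / n) / (1.0 - p / n) ** 2     # GCV with tr(H) = p for OLS
    return beta, gcv

xs = list(range(10))
ys = [x if x <= 5 else 5 + 2 * (x - 5) for x in xs]   # bend at x = 5
best = min((fit_truncated_spline(xs, ys, k)[1], k) for k in (2, 3, 4, 5, 6, 7))
print(best)   # GCV should pick the knot at the true bend
```

Because the synthetic data are exactly piecewise linear, GCV is minimized (essentially zero) at the true knot, which is the behavior the selection criterion exploits on real data.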
Procedia PDF Downloads 321649 The Signaling Power of ESG Accounting in Sub-Sahara Africa: A Dynamic Model Approach
Authors: Haruna Maama
Abstract:
Environmental, social and governance (ESG) reporting is gaining considerable attention despite being voluntary. Meanwhile, providing ESG reporting consumes resources, raising the question of its value relevance. The study examined the impact of ESG reporting on the market value of listed firms in SSA, using the annual and integrated reports of 276 listed sub-Saharan Africa (SSA) firms. The integrated reporting scores of the firms were analysed using a content analysis method. A multiple regression estimation technique using a GMM approach was employed for the analysis. The results revealed that ESG has a positive relationship with firms' market value, suggesting that investors are interested in the ESG information disclosure of firms in SSA. This suggests that extensive ESG disclosures are attempts by firms to obtain the approval of powerful social, political and environmental stakeholders, especially institutional investors. Furthermore, the market value evidence is consistent with signalling theory, which postulates that firms provide integrated reports as a signal to influence the behaviour of stakeholders. This finding reflects the value investors place on social, environmental and governance disclosures, which affirms the view that conventional investors care about the social, environmental and governance issues of their potential or existing investee firms. Overall, the evidence is consistent with the prediction of signalling theory. In the context of this theory, integrated reporting is seen as part of firms' overall competitive strategy to influence investors' behaviour. The findings of this study make unique contributions to knowledge and practice in corporate reporting.Keywords: environmental accounting, ESG accounting, signalling theory, sustainability reporting, sub-Saharan Africa
Procedia PDF Downloads 77648 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the consumption measured when the building is operational. When evaluating this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based on knowledge about both the measurement process and the input quantities which influence the result of the measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has been monitored, and multiple sensors have been mounted on candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features for our research work.Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method
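The Monte Carlo route (MCS) mentioned above propagates input uncertainties by repeatedly sampling the inputs and re-evaluating the model; the spread of the outputs is the standard uncertainty of the estimate. A sketch with a deliberately simplified heat-loss model and hypothetical input uncertainties (the paper's model and sensor data are not reproduced):

```python
import random
import statistics

def energy_kwh(u, area, delta_t, hours):
    """Illustrative steady-state heat-loss model:
    E = U * A * dT * hours / 1000 (kWh)."""
    return u * area * delta_t * hours / 1000.0

random.seed(42)
# Hypothetical means and standard uncertainties of each input quantity
draws = [energy_kwh(random.gauss(1.8, 0.2),    # U-value, W/(m2.K)
                    random.gauss(102.0, 1.0),  # floor area, m2
                    random.gauss(15.0, 1.5),   # indoor-outdoor dT, K
                    2000.0)                    # heating hours (held fixed)
         for _ in range(20000)]
mean = statistics.fmean(draws)
u_std = statistics.stdev(draws)                # standard uncertainty (MCS)
print(f"E = {mean:.0f} kWh, u(E) = {u_std:.0f} kWh")
```

For this multiplicative model the GUM first-order propagation formula (summing squared relative uncertainties) gives nearly the same answer, which is one way the paper-style comparison of GUM versus MCS can be checked.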
Procedia PDF Downloads 459647 Factors of Adoption of the International Financial Reporting Standard for Small and Medium Sized Entities
Authors: Uyanga Jadamba
Abstract:
Globalisation of the world economy has necessitated the development and implementation of a comparable and understandable reporting language suitable for use by all reporting entities. The International Accounting Standards Board (IASB) provides an international reporting language that lets all users understand the financial information of their business and potentially allows them access to finance at an international level. The study is based on logistic regression analysis to investigate the factors behind the adoption of the International Financial Reporting Standard for Small and Medium-sized Entities (IFRS for SMEs). The study started with a list of 217 countries from World Bank data. Due to limited data availability, the final sample consisted of 136 countries, including 60 countries that have adopted the IFRS for SMEs and 76 countries that have not yet adopted it. The study covered the period from 2010 to 2020 and obtained 1360 observations. The findings confirm that the adoption of the IFRS for SMEs is significantly related to the existence of national reporting standards, law enforcement quality, a common law legal system, and the extent of disclosure. The likelihood of adoption of the IFRS for SMEs decreases if the country already has a national reporting standard for SMEs, which suggests that the implementation and transitional costs of changing reporting standards are relatively high. The results further suggest that adoption of the new standard is easier in countries with constructive law enforcement and effective application of laws. The findings also show that adoption increases if countries have a common law system, which suggests that efficient reporting regulations are more widespread in these countries. Countries with a high extent of disclosure of their financial information are more likely to adopt the standard than others. 
The findings lastly show that audit quality and primary education level have no significant impact on adoption. One possible explanation could be that accounting professionals in developing countries lacked complete knowledge of the international reporting standards even though there was a requirement to comply with them. The study contributes to the literature by identifying factors that impact the adoption of the IFRS for SMEs. It helps policymakers better understand and apply the standard to improve the transparency of financial statements. The benefit of adopting the IFRS for SMEs is significant due to the relaxed and tailored reporting requirements for SMEs, the reduced burden on professionals to comply with the standard, and the transparent financial information provided to gain access to finance. The results of the study are useful to emerging economies, where SMEs dominate the economy, in informing their evaluation of the adoption of the IFRS for SMEs.Keywords: IFRS for SMEs, international financial reporting standard, adoption, institutional factors
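The logistic regression at the core of the study relates a binary adoption outcome to country-level covariates. A toy stand-in with plain gradient descent and invented data for two of the reported factors (existence of a national SME standard and a common-law system); coefficient signs, not magnitudes, are the point, and none of these numbers come from the paper:

```python
import math

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain stochastic-gradient logistic regression:
    w[0] is the intercept, w[1:] the feature coefficients."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                      # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

# Hypothetical countries: [has national SME standard, common-law system]
X = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 0], [0, 1], [1, 0]]
y = [0, 0, 1, 1, 1, 0, 1, 0]               # 1 = adopted IFRS for SMEs
w = fit_logistic(X, y)
print(w[1] < 0, w[2] > 0)  # national standard lowers, common law raises odds
```

On this toy data the fitted signs mirror the paper's findings: a pre-existing national standard depresses the adoption odds while a common-law system raises them.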
Procedia PDF Downloads 81646 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in the flood quantiles for an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
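The original (single) Grubbs and Beck screen flags flows below a threshold built from the mean and standard deviation of the log-transformed series; the critical value K_N below uses the common Bulletin 17B approximation. A sketch on a hypothetical annual flood series with one suspect low flow (the study's catchment data are not reproduced):

```python
import math
import statistics

def grubbs_beck_low(flows):
    """Single (original) Grubbs-Beck screen for potentially influential
    low flows: flag values below 10**(mean - K_N * std) of log10 flows.
    K_N is the widely used 10%-significance approximation."""
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    k = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    threshold = 10 ** (statistics.fmean(logs) - k * statistics.stdev(logs))
    return [q for q in flows if q < threshold], threshold

# Hypothetical annual maximum floods (m3/s) with one suspect low value
peaks = [2.0, 150, 180, 210, 165, 190, 220, 175, 205, 185, 195, 160]
low, thr = grubbs_beck_low(peaks)
print(low, round(thr, 1))
```

The multiple Grubbs-Beck version evaluated in the paper repeats this screen sweeping over candidate numbers of low outliers rather than testing only the single smallest observation.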
Procedia PDF Downloads 450645 The Effects of Time and Cyclic Loading to the Axial Capacity for Offshore Pile in Shallow Gas
Authors: Christian H. Girsang, M. Razi B. Mansoor, Noorizal N. Huang
Abstract:
An offshore platform was installed in 1977 about 260 km offshore of West Malaysia at a water depth of 73.6 m. Twelve (12) piles were installed, of which four (4) are skirt piles. The piles have a 1.219 m outside diameter and a wall thickness of 31 mm and were driven to 109 m below the seabed. Deterministic analyses of the pile capacity under axial loading were conducted using the current API (American Petroleum Institute) method and four (4) CPT-based methods: the ICP (Imperial College Pile) method, the NGI (Norwegian Geotechnical Institute) method, the UWA (University of Western Australia) method and the Fugro method. A statistical analysis of the model uncertainty associated with each pile capacity method was performed. Two (2) cases were analysed: Pile 1 and the piles other than Pile 1, where Pile 1 is the pile most affected by shallow gas problems. Using the mean estimate of soil properties, the five (5) methods used for deterministic estimation of axial pile capacity in compression predict an axial capacity of 28 to 42 MN for Pile 1 and 32 to 49 MN for the piles other than Pile 1. These values refer to the static capacity shortly after pile installation; they do not include the effects of cyclic loading during the design storm or of time since installation on the axial pile capacity. On average, the axial pile capacity is expected to have increased by about 40% because of ageing since the installation of the platform in 1977. On the other hand, cyclic loading effects during the design storm may reduce the axial capacity of the piles by around 25%. The study concluded that all piles have sufficient safety factors when pile ageing and cyclic loading effects are considered, as all safety factors are above 2.0 for maximum operating and storm loads.Keywords: axial capacity, cyclic loading, pile ageing, shallow gas
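The two adjustments reported above combine multiplicatively on the as-installed static capacity: a 40% ageing gain followed by a 25% cyclic-loading reduction. A small arithmetic sketch over the Pile 1 capacity range from the abstract; the 14 MN storm load used for the safety-factor column is a hypothetical figure, not a value from the paper:

```python
def adjusted_capacity(q_static, ageing_gain=0.40, cyclic_loss=0.25):
    """Applies the abstract's average ageing increase (+40%) and
    cyclic-loading reduction (-25%) to the static axial capacity."""
    return q_static * (1.0 + ageing_gain) * (1.0 - cyclic_loss)

# Range of API/CPT-method estimates for Pile 1 (MN), from the abstract;
# the 14 MN design storm load is an assumed illustrative value.
for q0 in (28.0, 42.0):
    q = adjusted_capacity(q0)
    print(f"Q0 = {q0:.0f} MN -> adjusted {q:.1f} MN, SF = {q / 14.0:.2f}")
```

Note that the net effect of the two factors is 1.40 * 0.75 = 1.05, i.e. ageing slightly more than offsets the cyclic degradation, which is consistent with the paper's conclusion that safety factors remain adequate.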
Procedia PDF Downloads 345