Search results for: analytical validation
3145 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls
Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin
Abstract:
The objective of this paper is to investigate the effects of wall thickness and of axial loading during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors govern the temperature distributions in concrete members and are mainly quantified through experiments. Toward this goal, three wall specimens of different thicknesses are heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses are measured using thermocouples. In addition, two wall specimens are heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on both the wall thickness and the axial load. After the fire tests, the specimens are cured for one month and then subjected to loading tests. The heated specimens are compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference is evident with respect to the wall thickness. To validate the experimental results, finite element models are generated using material properties obtained from the experiments at elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, is then applied to model fire-damaged walls 2,800 mm in height (the typical story height of residential buildings in Korea) while considering the buckling effect. The models for the structural analyses are generated from the deformed shapes obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not depend significantly on the wall thickness, owing to the restraint provided by the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.
Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis
Procedia PDF Downloads 215
3144 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling
Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak
Abstract:
Nucleate boiling is characterized by the nucleation, growth, and departure of tiny individual vapor bubbles that originate in cavities or imperfections in the heating surface. It finds a wide range of applications, e.g. in heat exchangers and steam generators, core cooling in power reactors or rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety-limit heat flux (the critical heat flux; heat fluxes above it can damage the heating surface) for any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of the bubble dynamics on the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and are often in disagreement with each other. The correlations obtained from such data are likewise limited to the range of conditions for which they were established, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models and arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data over a wide range of conditions, first as input and later for their validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water have been carried out under atmospheric pressure to study the bubble dynamics: growth rates, departure sizes, and frequencies for nucleate pool boiling.
A number of heating elements have been used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry, along with experimental conditions such as the degree of sub-cooling, the superheat, and the heat flux. The data obtained have been compared with the existing data and correlations in the literature in order to build an exhaustive database for pool boiling conditions.
Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling
Procedia PDF Downloads 302
3143 Social Networks in a Communication Strategy of a Large Company
Authors: Kherbache Mehdi
Abstract:
As part of the validation of the Master in Business Administration (Marketing and Sales) at the INSIM international institute in management, Blida, we had the opportunity to complete a professional internship at Sonelgaz Enterprise and to write a thesis. The thesis deals with the integration of social networking into the communication strategy of a company. The research question is: how can communicating through social networks be a solution for companies? The challenge addressed by this thesis was to suggest limits and recommendations to Sonelgaz Enterprise concerning social networks. Taken together, social networks represent more than a billion people as a potential target for companies. Through research and a qualitative approach, we have identified three valid hypotheses. The first hypothesis confirms that no company can ignore social networks in its communication strategy. The second hypothesis demonstrates that it is necessary to prepare a strategy that integrates social networks into the communication plan of the company. The risk of such a strategy is very limited: failure on social networks does not constrain the enterprise, social networking is not expensive, and a bad image resulting from it is not as damaging in the long term. On the other hand, the return on investment is difficult to evaluate. Finally, the last hypothesis shows that firms establish a new relationship between consumers and brands thanks to the proximity allowed by social networks. After validating the hypotheses, we suggested some recommendations to Sonelgaz Enterprise regarding communication through social networks. Firstly, the company must use the interactivity of social networks in order to have fruitful exchanges with the community. We also recommended having a strategy for handling negative comments. The company should also deliver resources to the community through a community manager, in order to maintain a good relationship with the community.
Furthermore, we advised using social networks for business intelligence. Sonelgaz Enterprise can publish creative and interactive content, for example through applications on Facebook. Finally, we recommended that the company not be intrusive with its “fans” and “followers” and that it be open to all the platforms: Twitter, Facebook, and LinkedIn, for example.
Keywords: social network, buzz, communication, consumer, return on investment, internet users, web 2.0, Facebook, Twitter, interaction
Procedia PDF Downloads 422
3142 Estimating Water Balance at Beterou Watershed, Benin Using Soil and Water Assessment Tool (SWAT) Model
Authors: Ella Sèdé Maforikan
Abstract:
Sustainable water management requires quantitative information on, and knowledge of, the spatiotemporal dynamics of the hydrological system within a basin, and this can be achieved through research. Several studies have investigated both surface water and groundwater in the Beterou catchment; however, few published papers apply SWAT modeling there. The objective of this study was to evaluate the performance of SWAT in simulating the water balance within the watershed. The input data consist of a digital elevation model, land use maps, a soil map, climatic data, and discharge records. The model was calibrated and validated using the Sequential Uncertainty Fitting (SUFI-2) approach. The calibration period ran from 1989 to 2006, with a four-year warm-up period (1985-1988), and validation covered 2007 to 2020. The goodness of fit of the model was assessed using five indices: the Nash-Sutcliffe efficiency (NSE), the ratio of the root mean square error to the standard deviation of the measured data (RSR), the percent bias (PBIAS), the coefficient of determination (R²), and the Kling-Gupta efficiency (KGE). Results showed that the SWAT model successfully simulated river flow in the Beterou catchment, with NSE = 0.79, R² = 0.80, and KGE = 0.83 for the calibration process, against NSE = 0.78, R² = 0.78, and KGE = 0.85 for the validation process using site-based streamflow data. The relative error (PBIAS) ranges from -12.2% to 3.1%. The runoff curve number (CN2), moist bulk density (SOL_BD), baseflow alpha factor (ALPHA_BF), and available water capacity of the soil layer (SOL_AWC) were the most sensitive parameters. The study motivates further research with uncertainty analysis, gives recommendations for model improvement, and provides an efficient means to improve rainfall and discharge measurement data.
Keywords: watershed, water balance, SWAT modeling, Beterou
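The goodness-of-fit indices cited in this abstract (NSE, RSR, PBIAS, KGE) have standard closed-form definitions; a minimal Python sketch, assuming paired arrays of observed and simulated discharges, is:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values < 0 are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    """Percent bias; with the SWAT sign convention, negative values mean overestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def kge(obs, sim):
    """Kling-Gupta efficiency combining correlation, bias ratio, and variability ratio."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()       # bias ratio
    alpha = sim.std() / obs.std()        # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)
```

All four reach their ideal values (1, 0, 0, 1 respectively) when simulated and observed series coincide.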
Procedia PDF Downloads 55
3141 In Silico Exploration of Quinazoline Derivatives as EGFR Inhibitors for Lung Cancer: A Multi-Modal Approach Integrating QSAR-3D, ADMET, Molecular Docking, and Molecular Dynamics Analyses
Authors: Mohamed Moussaoui
Abstract:
A series of thirty-one potential inhibitors of the epidermal growth factor receptor kinase (EGFR), derived from quinazoline, underwent 3D-QSAR analysis using the CoMFA and CoMSIA methodologies. Training and test sets of quinazoline derivatives were used to construct and validate the QSAR models, respectively, with dataset alignment performed using the lowest-energy conformer of the most active compound. The best-performing CoMFA and CoMSIA models demonstrated impressive determination coefficients, with R² values of 0.981 and 0.978, respectively, and leave-one-out cross-validation determination coefficients, Q², of 0.645 and 0.729, respectively. Furthermore, external validation using a test set of five compounds yielded predicted determination coefficients, R² test, of 0.929 and 0.909 for CoMFA and CoMSIA, respectively. Building upon these promising results, eighteen new compounds were designed and assessed for drug-likeness and ADMET properties through in silico methods. Additionally, molecular docking studies were conducted to elucidate the binding interactions between the selected compounds and the enzyme. Detailed molecular dynamics simulations were performed to analyze the stability, conformational changes, and binding interactions of the quinazoline derivatives with the EGFR kinase. These simulations provided deeper insights into the dynamic behavior of the compounds within the active site. This comprehensive analysis enhances the understanding of quinazoline derivatives as potential anti-cancer agents and provides valuable insights for lead optimization in the early stages of drug discovery, particularly for developing highly potent anticancer therapeutics.
Keywords: 3D-QSAR, CoMFA, CoMSIA, ADMET, molecular docking, quinazoline, molecular dynamics, EGFR inhibitors, lung cancer, anticancer
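Q² differs from R² in that each compound is predicted by a model trained without it. As a hedged illustration only (ordinary least squares standing in for the PLS regression that CoMFA/CoMSIA actually use), a leave-one-out Q² can be computed as:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q^2 for a linear model.
    Each sample is predicted by a fit on the remaining samples;
    Q^2 = 1 - PRESS / total sum of squares."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # design matrix with intercept column, fitted without sample i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.concatenate([[1.0], np.atleast_1d(X[i])]) @ coef
        press += (y[i] - pred) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)
```

For a dataset the model describes well, Q² approaches 1; a Q² well below the fitted R² (as with 0.645 vs. 0.981 here) signals that the fit is more optimistic than the model's predictive power.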
Procedia PDF Downloads 48
3140 Hybrid Rocket Motor Performance Parameters: Theoretical and Experimental Evaluation
Authors: A. El-S. Makled, M. K. Al-Tamimi
Abstract:
A mathematical model to predict the performance parameters (thrust, chamber pressure, fuel mass flow rate, mixture ratio, and regression rate over the firing time) of a hybrid rocket motor (HRM) is evaluated. The internal ballistic (IB) hybrid combustion model assumes that the solid fuel surface regression rate is controlled only by heat transfer (convective and radiative) from the flame zone to the burning solid fuel surface. A laboratory HRM is designed, manufactured, and tested for low-thrust-profile space missions (10-15 N) and for validating the mathematical model (computer program). The polymer materials and gaseous oxidizer selected for this experimental work are polymethyl methacrylate (PMMA) and polyethylene (PE) as solid fuel grains and gaseous oxygen (GO2) as the oxidizer. The variation of the various operational parameters with time is determined systematically and experimentally in firings of up to 20 seconds, and an average combustion efficiency of 95% of theory is achieved, which was the goal of these experiments. The comparison between the recorded firing data and the predicted analytical parameters shows good agreement, with an error that does not exceed 4.5% over the entire firing time. The current mathematical (computer) code can be used as a powerful tool for the analytical design of HRM parameters.
Keywords: hybrid combustion, internal ballistics, hybrid rocket motor, performance parameters
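The abstract's IB model derives the regression rate from heat transfer; as a purely illustrative sketch of two of the performance parameters named above, the classical alternative expresses the regression rate as a power law of oxidizer mass flux with empirical constants a and n (the numeric values in the example are hypothetical, not from this paper):

```python
def regression_rate(a, n, mdot_ox, port_area):
    """Classical hybrid-rocket regression-rate law rdot = a * Gox**n,
    where Gox = mdot_ox / port_area is the oxidizer mass flux through the port.
    a and n are fuel/oxidizer-specific empirical constants."""
    g_ox = mdot_ox / port_area
    return a * g_ox ** n

def mixture_ratio(mdot_ox, mdot_fuel):
    """Oxidizer-to-fuel (O/F) mass-flow ratio."""
    return mdot_ox / mdot_fuel
```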
Procedia PDF Downloads 311
3139 Aflatoxins Characterization in Remedial Plant-Delphinium denudatum by High-Performance Liquid Chromatography–Tandem Mass Spectrometry
Authors: Nadeem A. Siddique, Mohd Mujeeb, Kahkashan
Abstract:
Introduction: The objective of the present work is to study the occurrence of the aflatoxins B1, B2, G1, and G2 in medicinal plants, specifically in Delphinium denudatum. The aflatoxins were analysed by high-performance liquid chromatography-tandem quadrupole mass spectrometry with electrospray ionization (HPLC-MS/MS), and immunoaffinity column chromatography was used for the extraction and purification of the aflatoxins. PDA media was selected for the fungal count. Results: A good linear relationship was obtained for AFB1, AFB2, AFG1, and AFG2 at 1-10 ppb (r > 0.9995). The analyte recoveries at three different spiking levels were 88.7-109.1%, with low percent relative standard deviations in each case. The aflatoxins can be separated within 5 to 7 min using an Agilent XDB C18 column. We found that AFB1 and AFB2 were not present in D. denudatum, which is consistent with the exceptionally low number of fungal colonies observed after 6 h of incubation. Conclusion: The developed analytical method is straightforward, simple, accurate, and economical, and it can be successfully used to determine the aflatoxins in medicinal plants and consequently to control the quality of products. The presence of aflatoxin in the plant extracts was correlated with the low fungal load in the medicinal plants examined.
Keywords: aflatoxins, Delphinium denudatum, liquid chromatography, mass spectrometry
Procedia PDF Downloads 213
3138 Calculation of the Thermal Stresses in an Elastoplastic Plate Heated by Local Heat Source
Authors: M. Khaing, A. V. Tkacheva
Abstract:
This work is devoted to solving the problem of temperature stresses caused by point heating of a round plate. The plate is made of an elastoplastic material, so the Prandtl-Reuss model is used. A piecewise-linear Ishlinsky-Ivlev flow condition, in which the yield stress depends on the temperature, is taken as the loading surface. Piecewise-linear conditions (Tresca or Ishlinsky-Ivlev), in contrast to the Mises condition, make it possible to obtain solutions of the equilibrium equation in analytical form. In the problem under consideration, however, no solution can be obtained using the Tresca conditions: the equilibrium equation ceases to be satisfied when two Tresca conditions are fulfilled at once. Using the Ishlinsky-Ivlev plastic flow conditions makes it possible to solve the problem. At the same time, there is no solution on the edge of the Ishlinsky-Ivlev hexagon in the plane-stress state; therefore, the authors propose to jump from one edge of the hexagon to the adjacent one, which makes it possible to obtain an analytical solution. The paper compares solutions of the problem of thermal deformation of the plate. One of the solutions was obtained under the condition that the elastic moduli (Young's modulus and Poisson's ratio) depend on temperature, with the yield point assumed to be parabolically temperature dependent. The main results of the comparison are that the region of irreversible deformation is larger in the calculations for the problem with constant elastic moduli, and that there is no repeated plastic flow in the solution of the problem with temperature-dependent elastic moduli.
The absolute value of the irreversible deformations is higher for the solution of the problem in which the elastic moduli are constant; there are also minor differences in the distribution of the residual stresses.
Keywords: temperature stresses, elasticity, plasticity, Ishlinsky-Ivlev condition, plate, annular heating, elastic moduli
Procedia PDF Downloads 142
3137 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of the connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample, and the analysis time is unacceptable, especially for the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy, and efficient method for the quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2 in order to establish the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP: methanol (90:10 v/v, 80:20 v/v, and 70:30 v/v), in the flow rate (0.8, 1.0, and 1.2 mL.min-1), and in the oven temperature (30, 35, and 40 ºC). The USP method required a long run time (40-50 minutes) for the quantification of the drug. In addition, that method uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since the co-elution of the isomers can make peak integration unreliable.
Therefore, an optimization was proposed in order to reduce the analysis time while achieving better peak resolution and TF. For the optimized method, analysis of the response surface plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL.min-1, and 80:20 USP-MP: methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method and uses a lower flow rate, further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatographic peaks supports the implementation of the method for the quantification of the drug as a racemic mixture, without requiring the separation of the isomers.
Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
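The tailing factor used here as the Design Space indicator follows the standard USP definition, which a short sketch can make concrete (both widths measured at 5% of peak height):

```python
def tailing_factor(front_half_width, back_half_width):
    """USP tailing factor T = W05 / (2 * f): W05 is the full peak width at 5%
    of peak height and f is its front (leading) half-width.
    T = 1 means a perfectly symmetric peak; T > 1 indicates tailing."""
    w05 = front_half_width + back_half_width
    return w05 / (2.0 * front_half_width)
```

For instance, a peak whose trailing half-width at 5% height is 1.34 times its leading half-width gives T = 1.17, the value reported for the optimized method.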
Procedia PDF Downloads 639
3136 Validation of the Arabic Version of the InterSePT Scale for Suicidal Thinking (ISST) among the Arab Population in Qatar
Authors: S. Hammoudeh, S. Ghuloum, A. Abdelhakam, A. AlMujalli, M. Opler, Y. Hani, A. Yehya, S. Mari, R. Elsherbiny, Z. Mahfoud, H. Al-Amin
Abstract:
Introduction: Suicidal ideation and attempts are very common in patients with schizophrenia and still contribute to the high mortality in this population. The InterSePT Scale for Suicidal Thinking (ISST) is a validated tool for assessing suicidal ideation in patients with schizophrenia. This research aims to validate the Arabic version of the ISST among Arabs residing in Qatar. Methods: Patients diagnosed with schizophrenia were recruited from the Department of Psychiatry, Rumailah Hospital, Doha, Qatar. Healthy controls were recruited from the primary health care centers in Doha, Qatar. The validation procedures, including professional and expert translation, a pilot survey, and back translation of the ISST, were implemented. The diagnosis of schizophrenia was confirmed using the validated Arabic version of the Mini International Neuropsychiatric Interview (MINI 6, module K) for schizophrenia. The gold standard was module B on suicidality, also from MINI 6; this module was administered by a rater who was blinded to the results of the ISST. Results: Our sample (n = 199) was composed of 98 patients diagnosed with schizophrenia (age 36.03 ± 9.88 years; M/F ratio 2/1) and 101 healthy participants (age 35.01 ± 8.23 years; M/F ratio 1/2). Among the patients with schizophrenia, 26.5% were married, 17.3% had a college degree, 28.6% were employed, 9% had attempted suicide once, and 4.4% had more than 4 suicide attempts. Among the control group, 77.2% were married, 57.4% had a college degree, and 99% were employed. The mean score on the ISST was 2.36 ± 3.97 vs. 0.47 ± 1.44 for the schizophrenia and control groups, respectively. The overall Cronbach’s alpha was 0.91. Conclusions: This is the first study in the Arab world to validate the ISST in an Arabic-speaking population.
The psychometric properties indicate that the Arabic version of the ISST is a valid tool to assess the severity of suicidal ideation in Arabic-speaking patients diagnosed with schizophrenia.
Keywords: mental health, Qatar, schizophrenia, suicide
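The Cronbach's alpha reported above (0.91) measures internal consistency from the item variances and the variance of the total score; a minimal sketch over a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(scores, float)
    k = X.shape[1]                               # number of items
    item_vars = X.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Perfectly correlated items give alpha = 1; values around 0.9, as found here, indicate high internal consistency.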
Procedia PDF Downloads 562
3135 Analytical and Numerical Investigation of Friction-Restricted Growth and Buckling of Elastic Fibers
Authors: Peter L. Varkonyi, Andras A. Sipos
Abstract:
The quasi-static growth of elastic fibers is studied in the presence of distributed contact with an immobile surface, subject to isotropic dry or viscous friction. Unlike classical problems of elastic stability modelled by autonomous dynamical systems with multiple time scales (slowly varying bifurcation parameter, and fast system dynamics), this problem can only be formulated as a non-autonomous system without time scale separation. It is found that the fibers initially converge to a trivial, straight configuration, which is later replaced by divergence reminiscent of buckling phenomena. In order to capture the loss of stability, a new definition of exponential stability against infinitesimal perturbations for systems defined over finite time intervals is developed. A semi-analytical method for the determination of the critical length based on eigenvalue analysis is proposed. The post-critical behavior of the fibers is studied numerically by using variational methods. The emerging post-critical shapes and the asymptotic behavior as length goes to infinity are identified for simple spatial distributions of growth. Comparison with physical experiments indicates reasonable accuracy of the theoretical model. Some applications from modeling plant root growth to the design of soft manipulators in robotics are briefly discussed.
Keywords: buckling, elastica, friction, growth
Procedia PDF Downloads 190
3134 A Higher Order Shear and Normal Deformation Theory for Functionally Graded Sandwich Beam
Authors: R. Bennai, H. Ait Atmane, Jr., A. Tounsi
Abstract:
In this work, a new analytical approach using a refined hyperbolic shear deformation beam theory was developed to study the free vibration of graded sandwich beams under different boundary conditions. The effects of transverse shear strains and of the transverse normal deformation are considered. The constituent materials of the beam are assumed to vary gradually along the height direction according to a simple power-law distribution in terms of the volume fractions of the constituents; the two materials considered are a metal and a ceramic. The core layer is taken to be homogeneous and made of an isotropic material, while the face layers consist of FGM material with a homogeneous fraction compared to the middle layer. The equations of motion are obtained by the energy minimization principle. Analytical solutions for free vibration and buckling are obtained for sandwich beams under different support conditions; these conditions are taken into account by incorporating new shape functions. Finally, illustrative examples are presented to show the effects of changes in different parameters, such as the material gradation, the thickness-stretching effect, the boundary conditions, and the thickness-to-length ratio, on the free vibration and buckling of FGM sandwich beams.
Keywords: functionally graded sandwich beam, refined shear deformation theory, stretching effect, free vibration
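The "simple power distribution law" mentioned in this abstract is commonly written as P(z) = (Pc - Pm)·Vc + Pm, with ceramic volume fraction Vc = (z/h + 1/2)^k; a small sketch (the property values in the test are hypothetical Young's moduli, not taken from this paper):

```python
def fgm_property(p_ceramic, p_metal, z, h, k):
    """Effective material property at height z (-h/2 <= z <= h/2) of an FGM
    layer graded by the power law Vc = (z/h + 1/2)**k:
    pure metal at the bottom face (z = -h/2), pure ceramic at the top (z = +h/2).
    k is the power-law (gradation) index."""
    vc = (z / h + 0.5) ** k
    return (p_ceramic - p_metal) * vc + p_metal
```

Varying the index k sweeps the through-thickness profile from ceramic-rich (k < 1) to metal-rich (k > 1), which is how the "material graduation" parameter in the examples is controlled.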
Procedia PDF Downloads 246
3133 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features
Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang
Abstract:
This work aims to provide a comprehensive study on the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to exothermic or endothermic chemical reaction. The focus has been given to the local thermal non-equilibrium (LTNE) model. The LTNE approach helps us to deliver more accurate data regarding temperature distribution within the system and accordingly to provide more accurate Nusselt number and entropy generation rates. Darcy-Brinkman model is used for the momentum equations, and constant heat flux is assumed for boundary conditions for both upper and lower surfaces. Analytical solutions have been provided for both velocity and temperature fields. By incorporating the investigated velocity and temperature formulas into the provided fundamental equations for the entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding temperature distribution and interface heat flux ratio are observed. It has been found that the exothermicity or endothermicity characteristic of the channel does have a considerable impact on the temperature fields and entropy generation rates.
Keywords: entropy generation, exothermicity or endothermicity, forced convection, local thermal non-equilibrium, analytical modelling
Procedia PDF Downloads 415
3132 Effects of Changes in LULC on Hydrological Response in Upper Indus Basin
Authors: Ahmad Ammar, Umar Khan Khattak, Muhammad Majid
Abstract:
Empirically based lumped hydrologic models have an extensive track record of use in various watershed management and flood-related studies. This study focuses on the impact of LULC change over a 10-year period on the discharge of a watershed, using the lumped model HEC-HMS. The Indus above Tarbela region acts as a source of the main flood events in the middle and lower portions of the Indus because of the amount of rainfall and the topographic setting of the region, and the discharge pattern of the region is influenced by the LULC associated with it. In this study, Landsat TM images were used for the LULC analysis of the watershed, and daily TRMM satellite precipitation data were used as the rainfall input. The input variables for model building in HEC-HMS were then calculated from the GIS data collected and pre-processed in HEC-GeoHMS. The SCS-CN method was used as the loss model, the SCS unit hydrograph as the transform model, and Muskingum as the routing model. Discharge was simulated for the years 2000 and 2010: HEC-HMS was calibrated for the year 2000 and then validated for 2010. The performance of the model was assessed through the calibration and validation process, which resulted in R² = 0.92. The relative bias was -9% for the year 2000 and -14% for 2010. The results show that over 10 years the impact of LULC change on discharge has been negligible in the study area overall. One reason is that the proportion of built-up area in the watershed, which is the main causative factor of change in discharge, is less than 1% of the total area. However, locally, the impact of development was found to be significant in the built-up area of Mansehra city. The analysis was therefore repeated on the Mansehra city sub-watershed, which has an area of about 16 km² and more than 13% built-up area in 2010.
The results showed that with a 40% increase in built-up area in the city from 2000 to 2010, the discharge values increased by about 33 percent, indicating the impact of LULC change on the discharge.
Keywords: LULC change, HEC-HMS, Indus above Tarbela, SCS-CN
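The SCS-CN method named in this abstract computes direct runoff from storm rainfall and the curve number; a minimal sketch in metric units:

```python
def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS Curve Number direct runoff depth (mm) for a storm rainfall p_mm.
    S is the potential maximum retention (metric form); Ia is the initial
    abstraction, conventionally 0.2*S. Runoff is zero until P exceeds Ia."""
    s = 25400.0 / cn - 254.0        # potential maximum retention, mm
    ia = ia_ratio * s               # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

A higher curve number (more impervious, built-up land) lowers S and so raises runoff for the same storm, which is the mechanism behind the 33% discharge increase reported for the urbanizing sub-watershed.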
Procedia PDF Downloads 513
3131 The Exercise of Deliberative Democracy on Public Administrations Agencies' Decisions
Authors: Mauricio Filho, Carina Castro
Abstract:
The object of this project is to analyze long-serving public agents who have worked through several governments and find themselves in the position of having to deliberate with new agents recently installed in the public administration. For theoretical purposes, internal deliberation is understood as that practiced within public administration agencies, without any direct participation of the general public in the process. The assumption is that agents with longer periods of public service tend to step away from the momentary political discussions that guide the current administration and to concentrate on institutionalized routines and procedures, so that the individuals most politically aligned with the current government deliberate with less "passion" and more exchange of knowledge and information. The theoretical framework of this research is institutionalism, which is guided by a more pragmatic view and faces the fluidity of reality in ways that show the multiple relations between agents and their respective institutions. The critical aspirations of this project rest on the works of professors Cass Sunstein, Adrian Vermeule, and Philip Pettit and on literature from both institutional theory and the economic analysis of law, greatly influenced by the Chicago Law School. Methodologically, the paper is a theoretical review and is intended to be extended, at a future stage, into empirical tests for verification. Its main analytical tool is the appeal to theoretical and doctrinal areas of the juridical sciences, adopting the deductive and analytical method.
Keywords: institutions, state, law, agencies
Procedia PDF Downloads 265
3130 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs
Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili
Abstract:
OBJECTIVE/SCOPE (25-75): Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run on wireline, which can be time-consuming, costly, and/or challenging in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs from available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS (75-100): Caliper logs and LWD data from eleven wells, with a total of more than 80,000 data points, were obtained and imported into data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data of the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS (100-200): The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high compared to the correlation coefficients of the caliper data with the other parameters.
Other parameters, such as porosity, exhibited extremely low correlation coefficients with the caliper data. Various crossplots and visualizations of the data are also presented to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION (25-75): This study offers a unique and novel look into the relative importance and correlation between different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling using LWD data. Keywords: LWD measurements, caliper log, correlations, analysis
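As a minimal illustration of the statistical machinery involved, Pearson's correlation coefficient between a depth-indexed LWD curve and a caliper curve can be computed as follows; the data below are synthetic stand-ins, not the field dataset described in the abstract:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient between two log curves."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical depth-indexed measurements (illustrative only):
rng = np.random.default_rng(0)
bulk_density = rng.normal(2.4, 0.1, 500)          # g/cm3
caliper_max = 8.5 - 1.2 * bulk_density + rng.normal(0, 0.15, 500)  # inches

r = pearson_r(bulk_density, caliper_max)
```

In a workflow like the paper's, this computation would be repeated for each candidate LWD parameter against the maximum and minimum caliper readings, and the coefficients ranked to select inputs for the analytical model.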
Procedia PDF Downloads 121
3129 Machine Learning Algorithms for Rocket Propulsion
Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo
Abstract:
In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modeling complex situations. As a result, this technology helps reduce human intervention while producing accurate results. The methodology is also extensively used in aerospace engineering, a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life. It is therefore critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its safety and operation. Thus, this paper describes the use of machine learning algorithms for rocket propulsion to demonstrate that this technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. The data-driven methods used for each implementation are described in depth, and the obtained results are presented. Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion
Procedia PDF Downloads 115
3128 Passive Aeration of Wastewater: Analytical Model
Authors: Ayman M. El-Zahaby, Ahmed S. El-Gendy
Abstract:
Aeration of wastewater is essential for the proper operation of aerobic treatment units, where the influent wastewater normally has zero dissolved oxygen. This is due to the need of the aerobic microorganisms for oxygen to grow and survive. Typical aeration units for wastewater treatment, such as mechanical aerators or diffused aerators, require electric energy for their operation. Passive units, such as cascade aerators, spray aerators, and tray aerators, operate without the need for electric energy. In contrast to cascade and spray aerators, tray aerators require a much smaller area footprint for their installation, as the treatment stages are arranged vertically. To the best of the authors' knowledge, the design of tray aerators for aeration purposes has not been presented in the literature. The current research presents an analytical study of the design of tray aerators for increasing the dissolved oxygen in wastewater treatment systems, including an investigation of different design parameters and their impact on the aeration efficiency. The studied aerator acts as an intermediate stage between an anaerobic primary treatment unit and an aerobic treatment unit in small-scale treatment systems. Different free-falling flow regimes were investigated, and the thresholds for transition between regimes were obtained from the literature. The study focused on the jetting flow regime between trays. Starting from the two-film theory, an equation was derived that relates the dissolved oxygen concentration effluent from the system to the flow rate, number of trays, tray area, spacing between trays, number and diameter of holes, and the water temperature. A MATLAB model was developed for the derived equation. The expected aeration efficiency under different tray configurations and operating conditions was illustrated by running the model while varying the design parameters.
The impact of each parameter was illustrated. The overall system efficiency was found to increase with decreasing hole diameter. On the other hand, increasing the number of trays, tray area, flow rate per hole, or tray spacing had a positive effect on the system efficiency. Keywords: aeration, analytical, passive, wastewater
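The stage-wise character of such a model can be sketched numerically. The snippet below is a simplified illustration of two-film-theory oxygen transfer applied tray by tray; the parameter values (saturation concentration, kLa, contact time per tray) are illustrative assumptions, not the paper's derived equation or calibrated values:

```python
import math

def effluent_do(c_in, c_sat, k_la, contact_time, n_trays):
    """Dissolved oxygen leaving a stack of aeration trays.

    Per two-film theory, each stage drives the concentration toward
    saturation: C_out = C_s - (C_s - C_in) * exp(-kLa * t).
    Stages are applied in series, one per tray.
    """
    c = c_in
    for _ in range(n_trays):
        c = c_sat - (c_sat - c) * math.exp(-k_la * contact_time)
    return c

# Illustrative run: anaerobic effluent (0 mg/L DO) through 4 trays
do_out = effluent_do(c_in=0.0, c_sat=9.1, k_la=0.8, contact_time=0.5, n_trays=4)
```

Because each stage multiplies the remaining oxygen deficit by the same factor, adding trays raises the effluent concentration monotonically toward saturation, consistent with the positive effect of tray count reported above.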
Procedia PDF Downloads 209
3127 Structural Performance of Composite Steel and Concrete Beams
Authors: Jakub Bartus
Abstract:
In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study represents not only an analysis of the behaviour of composite beams having web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation was focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, an improvement of the load carrying capacity of composite beams was the prime objective. The study is divided into an analytical and a numerical part. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the description of the imposed structural issue in the form of a finite element model (FEM) using strut and shell elements accounting for material non-linearities. As an outcome, a number of conclusions were drawn describing and explaining the effect of web opening presence on the structural performance of composite beams. Keywords: composite beam, web opening, steel flange, total strain, finite element analysis
Procedia PDF Downloads 69
3126 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process
Abstract:
Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Inclusion of criticality in computing the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of several criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate the aforementioned weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, from 0 to 1, that quantifies the severity of the consequence of failure of each pipeline.
A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating the criticality. The practical value of the current study is represented by the automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where high efficiency in the use of materials and time resources is required. Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process
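The final aggregation step, combining ANP-derived weights with pipeline attribute values, can be sketched as a simple MAUT additive model. The factor names, weights, and utility values below are hypothetical placeholders, not the elicited expert judgments from the study:

```python
# Illustrative ANP-style weights for the criticality factors
# (hypothetical values; must sum to 1 for an additive MAUT model)
weights = {"economic": 0.40, "social": 0.35, "environmental": 0.25}

def criticality_index(attributes, weights):
    """MAUT additive aggregation.

    `attributes` holds the pipeline's utility scores, each already
    normalised to [0, 1]; the weighted sum is then a 0-1 criticality
    index quantifying the consequence of failure.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * attributes[k] for k in weights)

# A hypothetical pipeline with high economic but low environmental impact
pipe = {"economic": 0.8, "social": 0.5, "environmental": 0.3}
ci = criticality_index(pipe, weights)
```

Ranking the pipelines of a network by this index would yield the prioritization list described in the abstract.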
Procedia PDF Downloads 147
3125 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game
Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin
Abstract:
Despite its frequently criticized disadvantages, traditional paper-and-pencil assessment is the most frequently used method in our schools. Although such assessments provide acceptable measurements, they are not capable of capturing all the aspects and richness of learning and knowledge. Moreover, many assessments used in schools decontextualize the assessment from the learning; they focus on a learner's standing on a particular topic but not on how student learning changes over time. For these reasons, many scholars advocate that using simulations and games (S&G) as an assessment tool has significant potential to overcome the problems of traditionally used methods. S&G can benefit from changes in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students' learning at a particular time point. To investigate the potential of educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess performance without intervening in the learning. The experiment was conducted in an undergraduate-level engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed AEA method is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, this study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment can measure the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test measuring student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA < 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly on the latent factors in the latent factor model. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students' performance in the game. The results of the regression indicated that the two predictors explained 36.3% of the variance (R2 = .36, F(2,96) = 27.42.56, p < .00). Students' posttest scores significantly predicted game performance (β = .60, p < .000). The statistical results of the analyses show that the AEA can distinctly measure three major components of the digital circuit design course. This study can help researchers understand how to design an AEA and showcases an implementation by providing an example methodology to validate this type of assessment. Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design
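The convergent-validity check reduces to an ordinary multiple regression. A minimal sketch on synthetic data (hypothetical scores, not the study's data) shows how the explained variance R² and the posttest coefficient would be obtained:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 99                                     # same cohort size as the study
posttest = rng.normal(70, 10, n)           # hypothetical external test scores
pretest = rng.normal(60, 10, n)
# Hypothetical game-performance scores driven mainly by posttest ability
game_perf = 0.6 * posttest + 0.1 * pretest + rng.normal(0, 8, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), posttest, pretest])
beta, *_ = np.linalg.lstsq(X, game_perf, rcond=None)

# Proportion of variance explained by the two predictors
pred = X @ beta
r2 = 1.0 - ((game_perf - pred) ** 2).sum() / ((game_perf - game_perf.mean()) ** 2).sum()
```

A significant positive coefficient on the posttest term, together with a substantial R², is what supports the convergent-validity claim in the abstract.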
Procedia PDF Downloads 421
3124 Evaluation of Oxidative Changes in Soybean Oil During Shelf-Life by Physico-Chemical Methods and Headspace-Liquid Phase Microextraction (HS-LPME) Technique
Authors: Maryam Enteshari, Kooshan Nayebzadeh, Abdorreza Mohammadi
Abstract:
In this study, the oxidative stability of soybean oil under different storage temperatures (4 and 25˚C) during a 6-month shelf-life was investigated by various analytical methods and by headspace-liquid phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidative changes were monitored via analytical parameters consisting of acid value (AV), peroxide value (PV), p-anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acid profile, iodine value (IV), and oxidative stability index (OSI). In addition, concentrations of hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-LPME/GC-MS technique. The rate of oxidation in soybean oil stored at 25˚C was considerably higher. The AV, p-AV, and TBA gradually increased during the 6 months, while the amount of unsaturated fatty acids, the IV, and the OSI decreased. The concentrations of both hexanal and heptanal and the PV exhibited an increasing trend during the first months of storage; then, at the end of the third and fourth months, a sudden decrease was observed simultaneously in the concentrations of hexanal and heptanal and in the PV. These parameters increased again until the end of the shelf-life. As a result, temperature and time were effective factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil stored at 4˚C between AV and TBA (r2 = 0.96), between PV and p-AV (r2 = 0.9), a negative correlation between IV and TBA (r2 = 0.9), and between p-AV and TBA (r2 = 0.99). Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil
Procedia PDF Downloads 404
3123 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset
Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli
Abstract:
Sequence-to-sequence (seq2seq) models augmented with attention mechanisms play an increasingly important role in automated customer service. These models, which can recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are the neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions (dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter) in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores under both greedy and beam search strategies with a beam size of k = 3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search with k = 3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function on the performance of seq2seq models for chatbots. In particular, the model that integrates tanh activation proves to be a promising approach to improving the quality of chatbots in the customer support context. Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence
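The four scoring functions compared in the study can be written down compactly. The sketch below uses NumPy with toy dimensions rather than the paper's deep-learning framework; it shows the score computations and the softmax that turns raw scores into attention weights:

```python
import numpy as np

def dot_score(q, k):                     # Luong dot product
    return q @ k

def general_score(q, k, W):              # multiplicative/general: q^T W k
    return q @ W @ k

def general_tanh_score(q, k, W):         # extended multiplicative with tanh
    return np.tanh(q @ W @ k)

def additive_score(q, k, W1, W2, v):     # Bahdanau-style additive scoring
    return v @ np.tanh(W1 @ q + W2 @ k)

def attention_weights(scores):
    """Numerically stable softmax over raw alignment scores."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Toy decoder state, encoder states, and learned matrix (random stand-ins)
rng = np.random.default_rng(0)
d = 4
q = rng.normal(size=d)                   # decoder hidden state
keys = rng.normal(size=(3, d))           # encoder hidden states
W = rng.normal(size=(d, d))

w = attention_weights(np.array([general_score(q, k, W) for k in keys]))
```

Whichever scoring function is chosen, the resulting weights form a probability distribution over encoder positions, and the context vector fed to the decoder is the weight-averaged encoder states.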
Procedia PDF Downloads 78
3122 Analysing the Permanent Deformation of Cohesive Subsoil Subject to Long Term Cyclic Train Loading
Authors: Natalie M. Wride, Xueyu Geng
Abstract:
Subgrade soils of railway infrastructure are subjected to a significant number of load applications over their design life. The use of slab track on existing and proposed rail links requires a reduced maintenance and repair regime for the embankment subgrade, as access to the subgrade soils for remediation of cyclic deformation is restricted. It is therefore important to study the deformation behaviour of soft cohesive subsoils induced by long-term cyclic loading. In this study, a series of oedometer tests and cyclic triaxial tests (10,000 cycles) have been undertaken to investigate the undrained deformation behaviour of soft kaolin. X-ray computed tomography (CT) scanning of the samples was performed to determine the change in porosity and soil structure density of the sample microstructure resulting from the laboratory testing regime. Combined with the examination of excess pore pressures and strains obtained from the cyclic triaxial tests, the results are compared with an existing analytical solution for long-term settlement under repeated low-amplitude loading. Modifications to the analytical solution are presented based on the laboratory analysis and show good agreement with further test data. Keywords: creep, cyclic loading, deformation, long term settlement, train loading
Procedia PDF Downloads 299
3121 Effect of Concrete Strength and Aspect Ratio on Strength and Ductility of Concrete Columns
Authors: Mohamed A. Shanan, Ashraf H. El-Zanaty, Kamal G. Metwally
Abstract:
This paper presents the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of normal- and high-strength reinforced concrete columns confined with transverse steel under axial compressive loading. Nineteen normal-strength concrete rectangular columns with different variables were tested in this research to study the effect of concrete compressive strength and rectangularity ratio on the strength and ductility of columns. The paper also presents a nonlinear finite element analysis, using ANSYS 15 finite element software, of these specimens and of another twenty high-strength concrete square columns tested by other researchers. The results indicate that the axial force-axial strain relationships obtained from the analytical model using ANSYS are in good agreement with the experimental data. The comparison shows that ANSYS is capable of modeling and predicting the actual nonlinear behavior of confined normal- and high-strength concrete columns under concentric loading. The maximum applied load and the maximum strain were also confirmed to be satisfactory. Based on this agreement between the experimental and analytical results, a parametric numerical study was conducted with ANSYS 15 to clarify and evaluate the effect of each variable on the strength and ductility of the columns. Keywords: ANSYS, concrete compressive strength effect, ductility, rectangularity ratio, strength
Procedia PDF Downloads 510
3120 On Influence of Web Openings Presence on Structural Performance of Steel and Concrete Beams
Authors: Jakub Bartus, Jaroslav Odrobinak
Abstract:
In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study represents not only an analysis of the behavior of composite beams having web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation was focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, an improvement of the load carrying capacity of composite beams was the prime objective. The study is divided into analytical and numerical parts. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the discretization of the stated structural problem in the form of a finite element (FE) model using beam and shell elements accounting for material non-linearities. As an outcome, several conclusions were drawn describing and explaining the effect of web opening presence on the structural performance of composite beams. Keywords: beam, steel flange, total strain, web opening
Procedia PDF Downloads 77
3119 Agile Software Effort Estimation Using Regression Techniques
Authors: Mikiyas Adugna
Abstract:
Effort estimation is among the activities carried out in software development processes, and an accurate estimation model contributes to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still studying it to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, with 80% for training and 20% for testing. Following the train-test split, the two algorithms (Elastic Net and LASSO regression) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (a parameter grid is searched to tune and select the optimum parameters) and 5-fold cross-validation are used to obtain the final trained model. Finally, the final trained model is evaluated on the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared models. Of the two, LASSO regression achieved better predictive performance, acquiring PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and achieves higher estimation performance than models in the literature. Keywords: agile software development, effort estimation, elastic net regression, LASSO
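The two-phase training recipe described above can be sketched with scikit-learn. The dataset here is a synthetic stand-in (the story-point data are not reproduced in the abstract), so the numbers are illustrative only:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the agile story-point dataset
X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)
X = MinMaxScaler().fit_transform(X)              # normalize the entire dataset

# 80/20 train-test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1: train with default parameters, evaluate on the test set
baseline_r2 = Lasso().fit(X_tr, y_tr).score(X_te, y_te)

# Phase 2: grid search over alpha with 5-fold cross-validation
grid = GridSearchCV(Lasso(max_iter=10000),
                    {"alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_tr, y_tr)
tuned_r2 = grid.best_estimator_.score(X_te, y_te)
```

The same recipe applies unchanged to `ElasticNet`, with `l1_ratio` added to the parameter grid.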
Procedia PDF Downloads 71
3118 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green-Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, x and y. The effect of the vertical direction (z) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free-surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities, from both the wave amplitude itself and wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of the linearised wave theory.
Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model. Keywords: Green-Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
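For reference, the classical first-order solitary-wave profile commonly used as the analytical benchmark in such validations can be evaluated directly. This is the standard Boussinesq-type solution, offered here as an illustrative benchmark rather than the paper's GN solver:

```python
import numpy as np

def solitary_wave(x, t, H, h, g=9.81):
    """First-order solitary-wave free surface.

    eta(x, t) = H * sech^2( sqrt(3H / (4 h^3)) * (x - c t) ),
    with celerity c = sqrt(g (h + H)); H is wave height, h is
    still-water depth.
    """
    c = np.sqrt(g * (h + H))                 # wave celerity
    k = np.sqrt(3.0 * H / (4.0 * h**3))      # inverse horizontal length scale
    return H / np.cosh(k * (x - c * t))**2

# Profile of a 0.2 m high wave in 1 m of water at t = 0
x = np.linspace(-50.0, 50.0, 1001)
eta = solitary_wave(x, t=0.0, H=0.2, h=1.0)
```

Evaluating the same expression at later times shifts the crest by c·t without change of form, which is the permanent-form behaviour a depth-averaged solver is expected to reproduce.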
Procedia PDF Downloads 286
3117 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach
Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan
Abstract:
Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. In order to predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous and anisotropic material such as skin tissue, analytical approaches cannot provide solutions; numerical simulation is thus necessary. In this work, uniaxial tensile testing and a finite element method (FEM) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800 tensile machine. The uniaxial tensile test was simulated with the FEM, and the inverse optimization approach (or inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results. Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach
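The inverse-calibration loop can be illustrated in miniature: a forward model is evaluated repeatedly and its parameters adjusted until the predicted stress-strain curve matches the measurement. Below, a toy exponential (Fung-type) law stands in for the FEM forward solve, and the "measured" data are synthetic; both the model form and the parameter values are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def model_stress(params, strain):
    """Toy forward model: sigma = a * (exp(b * eps) - 1).

    In the actual workflow this would be a full FEM solve of the
    uniaxial tensile test with the trial material parameters.
    """
    a, b = params
    return a * (np.exp(b * strain) - 1.0)

# Synthetic "measured" curve generated with a = 0.05 MPa, b = 8, plus noise
strain = np.linspace(0.0, 0.4, 30)
rng = np.random.default_rng(1)
measured = model_stress([0.05, 8.0], strain) + rng.normal(0, 0.002, strain.size)

def residuals(params):
    """Misfit between model prediction and measured stresses."""
    return model_stress(params, strain) - measured

fit = least_squares(residuals, x0=[0.01, 5.0],
                    bounds=([1e-6, 0.1], [1.0, 20.0]))
a_hat, b_hat = fit.x
```

The optimizer recovers the generating parameters from the noisy curve; replacing `model_stress` with an FEM solve gives the FEM-based inverse optimization described in the abstract, at far greater cost per iteration.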
Procedia PDF Downloads 408
3116 Portfolio Management for Construction Company during Covid-19 Using AHP Technique
Authors: Sareh Rajabi, Salwa Bheiry
Abstract:
In general, Covid-19 caused extensive financial and non-financial damage to the economy and the community. The level and severity of the Covid-19 pandemic vary by region and by project type. The Covid-19 virus has recently emerged as one of the most important risk-management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk at the project and program level within portfolio management in order to avoid risky portfolios. Covid-19 was especially severe in South America, parts of Europe, and the Middle East. The pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, and transportation and commercial impacts. Therefore, this research proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic such as Covid-19 and its impacts on construction projects. The AHP technique uses four sub-criteria, health and safety, commercial risk, completion risk, and contractual risk, to evaluate each project and program. The result provides decision makers with information on which projects carry higher or lower risk in a Covid-19 or similar pandemic scenario, so that they can select the most feasible solution, based on effectively weighted criteria, for project selection within their portfolio to match the organization's strategies. Keywords: portfolio management, risk management, COVID-19, analytical hierarchy process technique
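The core of the AHP step is extracting priority weights from a pairwise comparison matrix and checking its consistency. The sketch below uses a hypothetical judgment matrix over the four sub-criteria; the comparison values are illustrative placeholders, not elicited expert judgments:

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights and consistency ratio (Saaty's AHP)."""
    A = np.asarray(A, float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)                   # principal eigenvalue lambda_max
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                               # normalized priority weights
    ci = (vals[i].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
    return w, ci / ri                          # weights, consistency ratio

# Hypothetical pairwise judgments over the four sub-criteria
#        H&S   commercial  completion  contractual
A = [[1,    3,    5,    7],
     [1/3,  1,    3,    5],
     [1/5,  1/3,  1,    3],
     [1/7,  1/5,  1/3,  1]]
w, cr = ahp_weights(A)
```

A consistency ratio below Saaty's 0.1 threshold indicates that the judgments are acceptably coherent; the resulting weights would then score each project in the portfolio against the four sub-criteria.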
Procedia PDF Downloads 109