Search results for: coordinate modal assurance criterion
597 Vibration Based Damage Detection and Stiffness Reduction of Bridges: Experimental Study on a Small Scale Concrete Bridge
Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti
Abstract:
Structural systems are often subjected to degradation processes caused by different kinds of phenomena, such as unexpected loadings, ageing of materials and fatigue cycles. This is especially true for bridges, whose safety evaluation is crucial for planning maintenance. This paper discusses the experimental evaluation of the stiffness reduction inferred from frequency changes due to a uniform damage scenario. For this purpose, a 1:4 scaled bridge has been built in the laboratory of the University of Bologna. It is made of concrete, and its cross section is composed of a slab linked to four beams. This concrete deck is 6 m long and 3 m wide, and its natural frequencies have been identified dynamically by exciting it with an impact hammer, a dropping weight, or by walking on it randomly. After that, a set of loading cycles has been applied to the bridge in order to produce a uniformly distributed crack pattern. During the loading phase, both the cracking moment and the yielding moment were reached. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies of the bridge have been identified before and after the occurrence of the damage corresponding to each load step. The behavior of breathing cracks and its effect on the natural frequencies has been taken into account in the analytical calculations. By using an exponential function derived from a large number of experimental tests reported in the literature, it has been possible to predict the stiffness reduction from the measured frequency variations. During the load tests, crack opening and midspan vertical displacement were also monitored.
Keywords: concrete bridge, damage detection, dynamic test, frequency shifts, operational modal analysis
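The frequency-to-stiffness relationship exploited above can be illustrated with a minimal sketch: for a single-degree-of-freedom idealization, the natural frequency scales with the square root of stiffness, so a residual stiffness ratio follows directly from the measured frequency ratio. This is an illustrative simplification, not the exponential calibration function used in the paper:

```python
def stiffness_ratio(f_damaged: float, f_undamaged: float) -> float:
    """Residual-to-initial stiffness ratio k_d/k_0 for an SDOF
    idealization, where f is proportional to sqrt(k)."""
    return (f_damaged / f_undamaged) ** 2

# a 10% drop in natural frequency implies roughly 19% stiffness loss
loss = 1.0 - stiffness_ratio(0.9 * 40.0, 40.0)
```

The breathing-crack behavior discussed in the abstract makes the real relationship amplitude-dependent, which is why the authors calibrate an empirical exponential law instead of relying on this idealization alone.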
Procedia PDF Downloads 184
596 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
The present study compares the semi-empirical wake-oscillator models that are used to predict vortex-induced vibration of structures. These models include those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. Each combines a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom structural oscillator. In order to use these models for estimating the top displacement of chimneys, only the first-mode vibration of the chimneys is considered. The modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator model and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the predicted response compares well with experimental data. For carrying out the iterative solution, the ODE solver of MATLAB is used. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and the model proposed by Skop and Griffin are nearly the same, while the model proposed by Farshidian and Dolatabadi predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, predicts a lower response than the non-linear models. Further, for large damping, the response predicted by the Eurocode compares relatively well with those of the non-linear models.
Keywords: chimney, deterministic model, Van der Pol, vortex-induced vibration
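The structure of such a coupled model can be sketched as follows, with a Van der Pol wake variable q forced by the structural acceleration, in the spirit of the Facchinetti-type models compared above. The parameter values and the plain RK4 integrator (standing in for MATLAB's ODE solver) are illustrative assumptions, not the calibrated values of the study:

```python
def rk4_step(state, dt, zeta, omega_s, omega_f, eps, A, M):
    """One RK4 step of a coupled SDOF structure / Van der Pol wake model:
       y'' + 2*zeta*omega_s*y' + omega_s**2 * y = M*q           (structure)
       q'' + eps*omega_f*(q**2 - 1)*q' + omega_f**2 * q = A*y'' (wake)
    """
    def deriv(s):
        y, yd, q, qd = s
        ydd = M * q - 2 * zeta * omega_s * yd - omega_s ** 2 * y
        qdd = A * ydd - eps * omega_f * (q ** 2 - 1) * qd - omega_f ** 2 * q
        return (yd, ydd, qd, qdd)

    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# integrate at lock-in (omega_f == omega_s) from a small wake perturbation
state, q_max = (0.0, 0.0, 0.1, 0.0), 0.0
for _ in range(20000):  # 200 s with dt = 0.01
    state = rk4_step(state, 0.01, zeta=0.01, omega_s=1.0,
                     omega_f=1.0, eps=0.3, A=0.3, M=0.05)
    q_max = max(q_max, abs(state[2]))
```

The Van der Pol term makes the wake self-excited, so q grows from the small perturbation to a bounded limit cycle, and the structural amplitude then builds up through the M*q forcing; this self-limiting behavior is what the linear (non-aero-elastic) model in the comparison cannot reproduce.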
Procedia PDF Downloads 221
595 Genetic Variation among the Wild and Hatchery Raised Populations of Labeo rohita Revealed by RAPD Markers
Authors: Fayyaz Rasool, Shakeela Parveen
Abstract:
Studies on the genetic diversity of Labeo rohita using molecular markers were carried out to investigate the genetic structure, and the levels of polymorphism and similarity, amongst five wild and hatchery-raised populations using RAPD markers. The samples were collected from five different locations representative of wild and hatchery-raised populations. RAPD data were analysed using Jaccard's coefficient and the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) for hierarchical clustering of similar groups on the basis of genotype similarity, and the resulting dendrogram divided the randomly selected individuals of the five populations into three classes/clusters. The variance decomposition for the optimal classification gave 52.11% within-class variation and 47.89% between-class variation. Principal Component Analysis (PCA) for grouping the genotypes from the different environmental conditions was performed with the Spearman Varimax rotation method for bi-plot generation; the co-occurrence of genotypes with similar genetic properties and the specificity of the different primers indicated clearly that the increase in the number of factors or components was correlated with a decrease in eigenvalues. Based on the Kaiser criterion (eigenvalues greater than one), the first two main factors accounted for 58.177% of the cumulative variability.
Keywords: variation, clustering, PCA, wild, hatchery, RAPD, Labeo rohita
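The clustering pipeline described above, Jaccard's coefficient followed by UPGMA, can be sketched in a few lines; the band-presence vectors below are made-up stand-ins for real RAPD profiles:

```python
def jaccard_distance(a, b):
    """1 - Jaccard similarity for binary band-presence vectors (1 = band
    present). Absence-absence matches are ignored, as in Jaccard's coefficient."""
    m11 = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    mismatch = sum(1 for x, y in zip(a, b) if x != y)
    total = m11 + mismatch
    return mismatch / total if total else 0.0

def upgma(dist):
    """UPGMA: repeatedly merge the pair of clusters with the smallest
    average pairwise distance; returns the merge history."""
    clusters = {i: [i] for i in range(len(dist))}
    history = []
    while len(clusters) > 1:
        d, i, j = min(
            (sum(dist[a][b] for a in clusters[i] for b in clusters[j])
             / (len(clusters[i]) * len(clusters[j])), i, j)
            for i in clusters for j in clusters if i < j)
        history.append((i, j, d))
        clusters[i].extend(clusters.pop(j))
    return history

# hypothetical RAPD band profiles for five genotypes
profiles = [[1, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 1],
            [0, 0, 1, 1], [1, 0, 0, 1]]
n = len(profiles)
dist = [[jaccard_distance(profiles[a], profiles[b]) for b in range(n)]
        for a in range(n)]
merges = upgma(dist)
```

Each entry of `merges` records one dendrogram node (which clusters merged, and at what average distance), which is the information the study's dendrogram visualizes.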
Procedia PDF Downloads 449
594 The Use of Emergency Coronary Angiography in Patients Following Out-Of-Hospital Cardiac Arrest and Subsequent Cardio-Pulmonary Resuscitation
Authors: Scott Ashby, Emily Granger, Mark Connellan
Abstract:
Objectives: 1) To identify whether emergency coronary angiography improves outcomes in studies examining OHCA of assumed cardiac aetiology. 2) If so, is it indicated in all patients resuscitated following OHCA, and if not, for whom is it indicated? 3) How effective are investigations for screening for the appropriate patients? Background: Out-of-hospital cardiac arrest is one of the leading mechanisms of death, and the most common causative pathology is coronary artery disease. In-hospital treatment following resuscitation greatly affects outcomes, yet there is debate over the most effective protocol. Methods: A literature search was conducted over multiple databases to identify all relevant articles published from 2005. Inclusion criteria were applied to all publications retrieved, which were then sorted by type. Results: A total of 3 existing reviews and 29 clinical studies were analysed in this review. There were conflicting conclusions; however, increased use of angiography was shown to improve outcomes in the majority of studies, which cover a variety of settings and cohorts. Recommendations: Currently, emergency coronary angiography appears to improve outcomes in all or most cases of OHCA of assumed cardiac aetiology, regardless of ECG findings. Until a better screening tool is available to reduce unnecessary procedures, the benefits appear to outweigh the costs and risks.
Keywords: out of hospital cardiac arrest, coronary angiography, resuscitation, emergency medicine
Procedia PDF Downloads 299
593 An Axiomatic Model for Development of the Allocated Architecture in Systems Engineering Process
Authors: Amir Sharahi, Reza Tehrani, Ali Mollajan
Abstract:
The final step to complete the “Analytical Systems Engineering Process” is the “Allocated Architecture”, in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any design for the system’s allocated architecture in which no clear pattern can be found for assigning the exclusive “responsibility” of each PC for fulfilling the allocated FR(s) is considered a poor design, as it may cause difficulties in determining the specific PC(s) that have failed to satisfy a given FR. The present study utilizes the principles of the Axiomatic Design method to address this problem mathematically and establishes an “Axiomatic Model” as a means of reaching good alternatives for developing the allocated architecture. This study proposes a “loss function” as a quantitative criterion for monetarily comparing non-ideal designs for the allocated architecture and choosing the one that imposes a relatively lower cost on the system’s stakeholders. For the case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and ineffectiveness due to coupling among the FRs of this subsystem.
Keywords: allocated architecture, analytical systems engineering process, functional requirements (FRs), physical components (PCs), responsibility of a physical component, system’s stakeholders
Procedia PDF Downloads 408
592 Prevalence of Menopausal Women with Clinical Symptoms of Allergy and Evaluation the Effect of Sex Hormone Combined with Anti-Allergy Treatment
Authors: Yang Wei, Xueyan Wang, Hui Zou
Abstract:
Objective: To investigate the prevalence of menopausal symptoms in patients with allergic symptoms and to evaluate the effect of sex hormones combined with anti-allergic therapy in these patients. Method: Women aged 45-65 years who also presented with allergic symptoms were randomly selected at the gynecological-endocrinology clinic of our hospital from Feb 1 to May 31, 2010. The patients were given oral estradiol valerate plus progestin pills combined with anti-allergy treatment and were then evaluated twice a week and again one month later. Evaluation criteria: the Menopause Rating Scale (MRS) and the degree of clinical symptoms were used to evaluate menopause and allergy separately. Results: 1) There were 195 patients with menopausal symptoms in this age range; their MRS scores were all over 15. 2) Among them, 45 patients (23%) also had allergic symptoms diagnosed by the allergy department. 3) Evaluation after one week: the menopausal symptoms had improved, with MRS scores less than or equal to 5 in all these patients; the skin manifestations of the allergic symptoms vanished completely. 4) Evaluation after one month: menopausal symptoms improved steadily; the other clinical symptoms of allergy were also improved or did not recur. Conclusion: The incidence of clinical allergy symptoms in menopausal women is high and needs attention. The effect of sex hormones combined with anti-allergic therapy is evident.
Keywords: menopausal, allergy, sex hormone, anti-allergy treatment
Procedia PDF Downloads 271
591 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
Floods have huge environmental and economic impacts; therefore, flood prediction receives a lot of attention. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using the ARIMA model. For this purpose, we used the Box-Jenkins approach, a four-stage method comprising model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used for ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests (the AIC criterion and the RACF and RPACF graphs) were used for verification of the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals' independence. Forecasting AMD for 10 future years demonstrated the ability of the model to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R²).
Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh river
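The identification stage of the Box-Jenkins procedure rests on inspecting the sample ACF and PACF. A minimal sample-ACF sketch (not the SAS/SPSS implementation used in the study, and with a made-up series) is:

```python
def sample_acf(x, max_lag):
    """Sample autocorrelation r_k = c_k / c_0, where
    c_k = (1/n) * sum_t (x_t - mean) * (x_{t+k} - mean)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / (n * c0)
        for k in range(max_lag + 1)
    ]

# a strongly periodic series shows a slowly decaying, oscillating ACF,
# suggesting differencing (the "I" in ARIMA) before fitting AR/MA terms
series = [3, 5, 8, 13, 8, 5, 3, 5, 8, 13, 8, 5, 3, 5, 8, 13]
r = sample_acf(series, 6)
```

In practice the orders p and q of the ARIMA(p,d,q) candidates are read off from where the sample PACF and ACF cut off, and competing candidates are then ranked by AIC, as the abstract describes for the chosen (4,1,1) model.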
Procedia PDF Downloads 287
590 Free Radical Dosimetry for Ultrasound in Terephthalic Acid Solutions Containing Gold Nanoparticles
Authors: Ahmad Shanei, Mohammad Mahdi Shanei
Abstract:
When a liquid is irradiated with high-intensity (> 1 W), low-frequency (≤ 1 MHz) ultrasound, acoustic cavitation occurs. Acoustic cavitation generates free radicals from the breakdown of water and other molecules. The presence of particles in a liquid provides nucleation sites for cavitation bubbles and lowers the ultrasonic intensity threshold needed for cavitation onset. This study was designed to measure hydroxyl radicals in terephthalic acid solutions containing 30 nm gold nanoparticles in the near field of a 1 MHz sonotherapy probe. The effects of the ultrasound irradiation parameters, namely the mode of sonication and the ultrasound intensity, on hydroxyl radical production have been investigated by the spectrofluorometric method. The fluorescence signal recorded in the terephthalic acid solution containing gold nanoparticles was higher than that in the terephthalic acid solution without gold nanoparticles. The results also showed that any increase in sonication intensity was associated with an increase in fluorescence intensity. Acoustic cavitation in the presence of gold nanoparticles has been proposed as a way of improving therapeutic effects on tumors. Terephthalic acid dosimetry is also suitable for detecting and quantifying free hydroxyl radicals, as a criterion of cavitation production, over a range of conditions in medical ultrasound fields.
Keywords: acoustic cavitation, gold nanoparticle, chemical dosimetry, terephthalic acid
Procedia PDF Downloads 473
589 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa
Abstract:
This study evaluates the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS-430 and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6 and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied on a solid water-equivalent phantom along with a 2D array dose detection system. The doses calculated using the 3D treatment planning system PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g. wedges, blocks, MLC-shaped fields) and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits. The exceptions were the large elongated wedged field, for which the errors on and outside the central axis were 0.2% and 0.5%, respectively, and the off-planned and off-axis elongated fields, for which the errors in the region outside the central axis of the beam were 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the accepted tolerance levels as recommended. Differences between dose values predicted by the TPS and values measured at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam
Procedia PDF Downloads 445
588 Quality Assurance for the Climate Data Store
Authors: Judith Klostermann, Miguel Segura, Wilma Jans, Dragana Bojovic, Isadora Christel Jimenez, Francisco Doblas-Reyes, Judit Snethlage
Abstract:
The Climate Data Store (CDS), developed by the Copernicus Climate Change Service (C3S) implemented by the European Centre for Medium-Range Weather Forecasts (ECMWF) on behalf of the European Union, is intended to become a key instrument for exploring climate data. The CDS contains both raw and processed data to provide information to users about the past, present and future climate of the earth. It allows easy and free access to climate data and indicators, presenting an important asset for scientists and stakeholders on the path to achieving a more sustainable future. The C3S Evaluation and Quality Control (EQC) function is assessing the quality of the CDS by undertaking a comprehensive user requirement assessment to measure users' satisfaction. Recommendations will be developed for the improvement and expansion of the CDS datasets and products. User requirements will be identified regarding the fitness of the datasets, the toolbox, and the overall CDS service. The EQC function of the CDS will help C3S make the service more robust: built on validated data that follow high-quality standards while being user-friendly. This function will be developed in close collaboration with the users of the service. Through their feedback, suggestions, and contributions, the CDS can become more accessible and meet the requirements of a diverse range of users. Stakeholders and their active engagement are thus an important aspect of CDS development. This will be achieved through direct interactions with users, such as meetings, interviews or workshops, as well as different feedback mechanisms like surveys or helpdesk services at the CDS. The results provided by the users will be categorized as a function of CDS products so that their specific interests will be monitored and linked to the right product.
Through this procedure, we will identify the requirements and criteria for data and products in order to formulate the corresponding recommendations for the improvement and expansion of the CDS datasets and products.
Keywords: climate data store, Copernicus, quality, user engagement
Procedia PDF Downloads 146
587 Factors Influencing the Logistics Services Providers' Performance: A Literature Overview
Authors: A. Aguezzoul
Abstract:
The selection and performance of Logistics Service Providers (LSPs) is a strategic decision that affects the overall performance of any company as well as its supply chain. It is a complex process which takes into account various conflicting quantitative and qualitative factors, as well as the outsourced logistics activities. This article focuses on the evolution of the weights associated with these factors over recent years, in order to better understand the change in the importance that logistics professionals place on these criteria when choosing their LSPs. To that end, an analysis of 17 main studies published during the 2014-2017 period was carried out, and the results are compared to those of a previous literature review on this subject. Our analysis allowed us to deduce the following observations: 1) LSP selection is a multi-criteria process; 2) the majority of studies are empirical and were conducted particularly in Asian countries; 3) the importance of the criteria has undergone significant changes following the emergence of information technologies that have favored close collaboration and partnership between LSPs and their customers, even on a worldwide scale; 4) the cost criterion is relatively less important than in the past; and finally 5) with the development of sustainable supply chains, the factors associated with the logistics activities of return and waste processing (reverse logistics) are becoming increasingly important in this multi-criteria process of selecting and evaluating LSP performance.
Keywords: logistics outsourcing, logistics providers, multi-criteria decision making, performance
Procedia PDF Downloads 154
586 Improving Electrical Safety through Enhanced Work Permits
Authors: Nuwan Karunarathna, Hemali Seneviratne
Abstract:
Distribution utilities inherently present electrical hazards for their workers, in addition to the general public, especially due to bare overhead lines spread out over a large geographical area. Therefore, certain procedures, such as de-energization, verification of de-energization, isolation, lock-out tag-out and earthing, are carried out to ensure safe working conditions when conducting maintenance work on de-energized overhead lines. However, measures must be taken to coordinate the above procedures and to ensure their successful and accurate execution. The issuing of 'Work Permits' is one such measure, used by the Distribution Utility considered in this paper. Unfortunately, the work permit method adopted by the Distribution Utility concerned has not been successful in creating the safe working conditions expected, as evidenced by four (4) fatalities of workers due to electrocution that occurred in the Distribution Utility from 2016 to 2018. Therefore, this paper attempts to identify deficiencies in the work permit method and related contributing factors through careful analysis of the four (4) fatalities and workplace practices, in order to rectify the shortcomings and prevent future incidents. The analysis shows that the present level of coordination between the 'Authorized Person' who issues the work permit and the 'Competent Person' who performs the actual work is grossly inadequate to achieve the intended safe working conditions. The paper identifies the need for active participation of a 'Control Person' who oversees the whole operation from a bird's-eye perspective, and recommends further measures, derived from the analysis of the fatalities, to address the identified lapses in the current work permit system.
Keywords: authorized person, competent person, control person, de-energization, distribution utility, isolation, lock-out tag-out, overhead lines, work permit
Procedia PDF Downloads 131
585 Study of Parking Demand for Offices – Case Study: Kolkata
Authors: Sanghamitra Roy
Abstract:
In recent times, India has experienced a phenomenal rise in the number of registered vehicles and vehicular trips, particularly intra-city trips, in most of its urban areas. The increase in vehicle ownership and use has increased parking demand immensely, and accommodating it is now a matter of great concern. Most cities do not have adequate off-street parking facilities, thus forcing people to park on the streets. This has resulted in decreased carrying capacity, decreased traffic speed, increased congestion, and increased environmental problems. While an integrated multi-modal transportation system is the answer to such problems, parking issues will continue to exist. In Kolkata, only 6.4% of the land is devoted to roads. The consequences of this huge crunch in road space, coupled with increased parking demand, are severe, particularly in the CBD and major commercial areas, making the role of off-street parking facilities in Kolkata even more critical. To meaningfully address parking issues, it is important to identify the factors that influence parking demand so that it can be assessed, and comprehensive parking policies and plans for the city can be formulated. This paper aims at identifying the factors that contribute to parking demand for offices in Kolkata and their degree of correlation with parking demand. The study is limited to home-to-work trips within the Kolkata Municipal Corporation (KMC) area, where parking-related issues are most pronounced. The data for the study were collected through personal interviews, questionnaires and direct observations from offices across the wards of the KMC. SPSS was used for classification and analysis of the data. The findings of this study will help in the re-assessment of the parking requirements specified in the Kolkata Municipal Corporation Building Rules, as a step towards alleviating parking-related issues in the city.
Keywords: building rules, office spaces, parking demand, urbanization
Procedia PDF Downloads 317
584 Numerical Analysis of Bearing Capacity of Caissons Subjected to Inclined Loads
Authors: Hooman Dabirmanesh, Mahmoud Ghazavi, Kazem Barkhordari
Abstract:
A finite element model for determining the bearing capacity of caissons subjected to inclined loads is presented in this paper. The model investigates the uplift capacity of a caisson with varying cross-sectional area. To this aim, the behavior of the soil is assumed to be elasto-plastic, and its failure is controlled by the Modified Cam-Clay failure criterion. The simulation takes coupled analysis into account. The approach is verified using available data from other research works, especially centrifuge data. Parametric studies are subsequently performed to investigate the effect of contributing parameters such as the aspect ratio of the caisson, the loading rate, the loading direction angle, and the points where the external load is applied. In addition, the influence of the caisson geometry is taken into account. The results show that the bearing capacity of the caisson increases with increasing taper angle; hence, the pullout capacity will increase using the same material. In addition, the bearing capacity of caissons strongly depends on the suction that is generated at the tip and on the sealed surface on top of the caisson. Other results concerning the influencing factors will be presented.
Keywords: aspect ratio, finite element method, inclined load, modified Cam clay, taper angle, undrained condition
Procedia PDF Downloads 263
583 A Detailed Computational Investigation into Copper Catalyzed Sonogashira Coupling Reaction
Authors: C. Rajalakshmi, Vibin Ipe Thomas
Abstract:
Sonogashira coupling reactions are widely employed in the synthesis of molecules of biological and pharmaceutical importance. Copper-catalyzed Sonogashira coupling reactions are gaining importance owing to the low cost and lower toxicity of copper compared to palladium catalysts. In the present work, a detailed computational study has been carried out on the Sonogashira coupling reaction between aryl halides and terminal alkynes catalyzed by a copper(I) species with trans-1,2-diaminocyclohexane as the ligand. All calculations are performed at the Density Functional Theory (DFT) level, using the hybrid Becke3LYP (B3LYP) functional. The Cu and I atoms are described using an effective core potential (LANL2DZ) for the inner electrons and its associated double-ζ basis set for the outer electrons. For all other atoms, the 6-311+G* basis set is used. We have identified the active catalyst species as a neutral three-coordinate trans-1,2-diaminocyclohexane-ligated Cu(I) alkyne complex and found that oxidative addition and reductive elimination occur in a single step, proceeding through one transition state. This is owing to the ease of reductive elimination involving coupling of Csp2-Csp carbon atoms and the low stability of the Cu(III) intermediate. This shows that the mechanism of copper-catalyzed Sonogashira coupling reactions is quite different from that of reactions catalyzed by palladium. To gain further insight into the mechanism, substrates containing various functional groups are considered in our study to assess their effect on the feasibility of the reaction. We have also explored the effect of the ligand on the catalytic cycle of the coupling reaction. The theoretical results obtained are in good agreement with the experimental observations. This shows the relevance of a combined theoretical and experimental approach for rationally improving cross-coupling reaction mechanisms.
Keywords: copper catalysed, density functional theory, reaction mechanism, Sonogashira coupling
Procedia PDF Downloads 116
582 Surface Sediment Quality Assessment in a Coastal Lagoon (NW Adriatic Sea) Based on SEM-AVS Analysis
Authors: Roberta Guerra, Juan Pablo Pozo Hernandez
Abstract:
Surface sediments from the coastal lagoon of Pialassa Piomboni in the NW Adriatic Sea were collected and analysed, and the potential ecological risks in the area were assessed based on the acid-volatile sulphide (AVS) model. The AVS levels are between 0.03 and 8.8 µmol g⁻¹, with an average of 3.1 µmol g⁻¹. The simultaneously extracted metals (∑SEM), the molar sum of Cd, Cu, Ni, Pb, and Zn, range from 0.3 to 6.6 µmol g⁻¹, with an average of 1.7 µmol g⁻¹. Most of the high ∑SEM concentrations are located in the southern area of the lagoon. [SEM]Zn had the comparatively highest mean concentration (1.4 µmol g⁻¹) and a maximum value of 6.1 µmol g⁻¹. Concentrations of [SEM]Cd, [SEM]Cu, [SEM]Ni, and [SEM]Pb were consistently lower, with maximum values of 0.007, 1.4, 0.3 and 0.2 µmol g⁻¹, respectively. Compared to the other metals, [SEM]Zn was the dominant component in all samples and accounted for approximately 31-93% of the ∑SEM, whereas the contribution of Cd, the most toxic metal studied, to ∑SEM was no more than 1%. According to the USEPA evaluation method, the sediment samples can be divided into the following three categories: category 1, adverse biological effects on aquatic life may be expected when ([SEM]-[AVS])/fOC > 3,000; category 2, adverse effects on aquatic life are uncertain when ([SEM]-[AVS])/fOC = 130 to 3,000; and category 3, no indication of adverse effects when ([SEM]-[AVS])/fOC < 130. Most of the surface sediments of the Pialassa Piomboni lagoon (> 90%) showed no adverse biological effects according to the criterion proposed by the USEPA, while adverse effects were uncertain at a few stations (~2%).
Keywords: sediment quality, heavy metals, coastal lagoon, bioavailability, SEM, AVS
Procedia PDF Downloads 405
581 A PROMETHEE-BELIEF Approach for Multi-Criteria Decision Making Problems with Incomplete Information
Abstract:
Multi-criteria decision aid methods consider decision problems where numerous alternatives are evaluated on several criteria. These methods are designed to deal with perfect information. However, in practice, this information requirement is clearly too strict. In fact, the imperfect data provided by more or less reliable decision makers usually affect decision results, since any decision is closely linked to the quality and availability of information. In this paper, a PROMETHEE-BELIEF approach is proposed to support multi-criteria decisions based on incomplete information. This approach handles problems with an incomplete decision matrix and unknown weights within the PROMETHEE method. On the basis of belief function theory, our approach first determines the distributions of belief masses based on PROMETHEE's net flows and then calculates the weights. Subsequently, it aggregates the mass distributions associated with each criterion using Murphy's modified combination rule in order to infer a global belief structure. The final ranking of actions is obtained via the pignistic probability transformation. A real-world case study concerning the location of a treatment center for healthcare waste with infectious risk in central Tunisia is presented to illustrate the detailed process of the PROMETHEE-BELIEF approach.
Keywords: belief function theory, incomplete information, multiple criteria analysis, PROMETHEE method
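The final step mentioned above, the pignistic probability transformation, spreads each belief mass evenly over the alternatives in its focal set. A minimal sketch, with made-up masses rather than the case-study data, is:

```python
def pignistic(masses):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|.
    `masses` maps frozensets of alternatives to belief masses summing to 1."""
    betp = {}
    for focal, m in masses.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + m / len(focal)
    return betp

# mass committed to 'a' alone, plus mass left uncommitted between 'a' and 'b'
bba = {frozenset({'a'}): 0.5, frozenset({'a', 'b'}): 0.5}
betp = pignistic(bba)
```

The uncommitted (ignorance) mass on {a, b} is split evenly, so the resulting BetP values sum to 1 and can be used directly to rank the actions.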
Procedia PDF Downloads 166
580 Application of Simulated Annealing to Threshold Optimization in Distributed OS-CFAR System
Authors: L. Abdou, O. Taibaoui, A. Moumen, A. Talib Ahmed
Abstract:
This paper proposes an application of simulated annealing to optimize the detection threshold in an ordered-statistics constant false alarm rate (OS-CFAR) system. Using conventional optimization methods, such as the conjugate gradient, can lead to a local optimum and miss the global optimum. Moreover, for a system with three or more sensors, it is difficult or impossible to find this optimum; hence the need to use other methods, such as meta-heuristics. Among the variety of meta-heuristic techniques is the simulated annealing (SA) method, inspired by a process used in metallurgy. This technique is based on the selection of an initial solution and the random generation of a nearby solution, in order to improve the criterion to be optimized. In this work, two parameters are subject to such optimisation: the statistical order (k) and the scaling factor (T). Two fusion rules, 'AND' and 'OR', were considered in the case where the signals are independent from sensor to sensor. The results showed that the proposed method is efficient in resolving such optimisation problems in a distributed system. The advantage of this method is that it allows browsing the entire solution space and theoretically avoids stagnation of the optimization process in a region of local minima.
Keywords: distributed system, OS-CFAR system, independent sensors, simulated annealing
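The accept-or-reject loop described above can be sketched generically. The quadratic toy objective below stands in for the actual detection criterion of the OS-CFAR system, and the geometric cooling schedule and step size are assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    """Generic SA: accept worse moves with probability exp(-delta/T),
    cooling T geometrically; track the best solution seen."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # always accept improvements; accept worse moves with prob exp(-delta/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= cooling
    return best, f_best

# toy run: minimize (x - 3)^2 over the reals
best, f_best = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.gauss(0.0, 0.5),
    x0=0.0)
```

For the OS-CFAR problem, `x` would be the pair (k, T) and `cost` would penalize deviation from the design false-alarm probability while rewarding detection probability; at high T the search roams the solution space, and as T cools it settles into the best region found, which is the escape-from-local-minima behavior the abstract highlights.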
Procedia PDF Downloads 497
579 Evaluation of Liquefaction Potential of Fine Grained Soil: Kerman Case Study
Authors: Reza Ziaie Moayed, Maedeh Akhavan Tavakkoli
Abstract:
This research aims to investigate and evaluate the liquefaction potential of fine-grained soils at a project site in Kerman city using different methods. Examination of the damage caused by recent earthquakes shows that fine-grained soils play an essential role in the level of damage caused by soil liquefaction. However, previous liquefaction investigations have paid limited attention to evaluating the cyclic resistance ratio of fine-grained soils, especially with the SPT method. Although using the standard penetration test (SPT) to assess the liquefaction potential of fine-grained soil is not common, it can be a helpful method owing to its rapidity, serviceability, and availability. In the present study, the liquefaction potential is first determined from the soil’s physical properties obtained from laboratory tests. Then, using the SPT test and its established criteria for evaluating the cyclic resistance ratio and the factor of safety against liquefaction, the correction for fine-grained soils is applied, and the results are compared. The results show that, in most cases, using the SPT test for liquefaction is more accurate than using laboratory tests alone, because it incorporates different physical parameters of the soil, which leads to an increase in the ultimate N₁(60,cs).Keywords: liquefaction, cyclic resistance ratio, SPT test, clay soil, cohesive soils
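A minimal sketch of the safety-factor computation that underlies SPT-based liquefaction screening, using the well-known simplified Seed–Idriss cyclic stress ratio; all layer values below are hypothetical, not the Kerman site data:

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d):
    """Simplified Seed-Idriss cyclic stress ratio:
    CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

def liquefaction_safety_factor(crr, csr):
    """FS = CRR / CSR; FS < 1 indicates liquefaction is likely."""
    return crr / csr

# Hypothetical layer: a_max = 0.3 g, total/effective stress 100/60 kPa,
# stress-reduction coefficient r_d = 0.95, CRR from an SPT correlation
csr = cyclic_stress_ratio(0.3, 100.0, 60.0, 0.95)
fs = liquefaction_safety_factor(0.18, csr)
```

Fines-content corrections of the kind discussed in the abstract enter through the clean-sand blow count N₁(60,cs) used to obtain the CRR.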
Procedia PDF Downloads 101
578 Finite Element Analysis of Resonance Frequency Shift of Laminated Composite Beam
Authors: Cheng Yang Kwa, Yoke Rung Wong
Abstract:
Laminated composite materials are widely employed in automotive, aerospace, and other industries. These materials provide distinct benefits due to their high specific strength, high specific modulus, and ability to be customized for a specific function. However, delamination is one of the main defects of laminated composite materials, and it can occur during manufacturing, regular operation, or maintenance. Delamination can cause considerable internal damage, unobservable by visual inspection, that leads to a significant loss of strength and stability and ultimately to catastrophic failure of the composite structure. Structural health monitoring (SHM) is the automated method for monitoring and evaluating the condition of a monitored object. There are several ways to conduct SHM in aerospace; one effective method is to monitor the natural frequency shift of a structure due to the presence of a defect. This study investigated the mechanical resonance frequency shift of a multi-layer composite cantilever beam due to interlaminar delamination. ANSYS Workbench® was used to create a four-ply laminated composite cantilever finite element model with a [90/0]s layup. Epoxy Carbon UD (230 GPa) prepreg was chosen, with a thickness of 2.5 mm for each ply. The natural frequencies of the finite element model with various degrees of delamination were simulated by modal analysis and then validated against the literature. The model without delamination had a natural frequency of 40.412 Hz, which differed by 1.55% from the calculated result (41.050 Hz). Thereafter, various degrees of delamination were mimicked by changing the frictional conditions at the middle ply-to-ply interface. The results suggest that delamination in the laminated composite cantilever induces a change in its stiffness, which alters its mechanical resonance frequency.Keywords: structural health monitoring, NDT, cantilever, laminate
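The stiffness-to-frequency link exploited by such SHM schemes can be illustrated with the Euler–Bernoulli cantilever formula; the section values below are hypothetical and do not reproduce the modelled prepreg beam:

```python
import math

def cantilever_f1(E, I, rho_A, L):
    """First bending natural frequency (Hz) of an Euler-Bernoulli
    cantilever: f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4))."""
    lam1 = 1.8751  # first root of 1 + cos(l)*cosh(l) = 0
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho_A * L ** 4))

# Hypothetical section: if damage reduces the bending stiffness EI by
# 10%, the frequency drops by a factor sqrt(0.9), i.e. about 5.1%
f_intact = cantilever_f1(E=70e9, I=1.3e-10, rho_A=0.034, L=0.3)
f_damaged = cantilever_f1(E=70e9 * 0.9, I=1.3e-10, rho_A=0.034, L=0.3)
shift = 1 - f_damaged / f_intact
```

Because the frequency scales with the square root of stiffness, even small delamination-induced stiffness losses leave a measurable frequency signature.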
Procedia PDF Downloads 101
577 Distributive Justice through Constitution
Authors: Rohtash
Abstract:
Academically, the literature on the concept of justice is vast: theories are voluminous and definitions numerous, yet the concept remains very difficult to define. Through the ages, justice has been evolving as a way of reasoning about how individuals and communities do what is right, just, and fair to all in a society. Justice is a relative and dynamic concept, not an absolute one. It differs between societies according to their morality and ethics. The idea of justice cannot arise from a single morality but from the interaction of competing moralities and contending perspectives. Justice is a conditional and circumstantial term and therefore takes different meanings in different contexts. Justice is the application of the laws: a values-based concept intended to protect the rights and liberties of the people. It is a socially created concept with no physical reality; it exists in society on the basis of a spirit of sharing among the communities and members of that society. The conception of justice among communities and individuals rests on their social coordination, and it can be effective only when people’s judgments are based on collective reasoning. Their behavior is shaped by social values, norms, and laws. People must accept, share, and respect the set of principles for delivering justice; justice can then serve as a reasonable solution to conflicts and a way to coordinate behavior in society. The subject matter of distributive justice is the public good and societal resources, which should be evenly distributed among the different sections of society according to principles developed and established by the state through legislation, public policy, and executive orders. The socioeconomic transformation of society is adopted by the constitution within the limits of its morality and gives a new dimension to transformative justice. Therefore, both procedural and transformative justice are part of distributive justice.
Distributive justice is purely an economic phenomenon: it concerns the allocation of resources among communities and individuals. Its subject matter is the distribution of rights, responsibilities, burdens, and benefits in society on the basis of the capacity and capability of individuals.Keywords: distributive justice, constitutionalism, institutionalism, constitutional morality
Procedia PDF Downloads 83
576 Integrated Risk Management in The Supply Chain of Essential Medicines in Zambia
Authors: Mario M. J. Musonda
Abstract:
Access to health care is a human right, which includes having timely access to affordable, quality essential medicines at the right place and in sufficient quantity. However, inefficient public-sector supply chain management contributes to constant shortages of essential medicines at health facilities. The literature review involved a desktop study of published research and reports on risk management, on supply chain management of essential medicines, and on integrating the two to increase the efficiency of the latter. The research was conducted on a sample population drawn from offices under the Ministry of Health Headquarters, the Lusaka Provincial and District Offices, selected health facilities in Lusaka, Medical Stores Limited, the Zambia Medicines Regulatory Authority, and Cooperating Partners. Individuals were selected judgmentally according to their functions in the selection and quantification, regulation, procurement, storage, distribution, quality assurance, and dispensing of essential medicines. Structured interviews and discussions were held with selected experts, and self-administered questionnaires were distributed; data from the 35 returned and usable questionnaires out of the 50 distributed were collected and analysed. The highest-prioritised risks were inadequate and inconsistent fund disbursements, weak information management systems, weak quality management systems, and insufficient resources (human resources and infrastructure), among others. The results of the study showed that participating institutions and organisations need to implement effective risk management systems to increase the efficiency of the entire supply chain and thereby avoid or reduce shortages of essential medicines at health facilities. The findings can be used to increase the efficiency of the public-sector supply chain of essential medicines and other pharmaceuticals.Keywords: essential medicine, risk assessment, risk management, supply chain, supply chain risk management
Procedia PDF Downloads 443
575 Impact Factor Analysis for Spatially Varying Aerosol Optical Depth in Wuhan Agglomeration
Authors: Wenting Zhang, Shishi Liu, Peihong Fu
Abstract:
As an indicator of air quality directly related to ground-level PM2.5 concentrations, the spatial-temporal variation of Aerosol Optical Depth (AOD) and the analysis of its impact factors have become a hot spot in air pollution research. This paper addresses the non-stationarity and the spatial autocorrelation (Moran’s I index of 0.75) of AOD in the Wuhan agglomeration (WHA) in central China, and uses geographically weighted regression (GWR) to identify the spatial relationship between AOD and its impact factors. The 3 km AOD product of the Moderate Resolution Imaging Spectroradiometer (MODIS) is used in this study. Beyond socioeconomic factors, land use density, vegetation cover, and elevation, a landscape metric is also considered as a factor. The results suggest that the GWR model is capable of dealing with spatially varying relationships, with an R-squared, corrected Akaike Information Criterion (AICc), and standardized residuals better than those of the ordinary least squares (OLS) model. The GWR results suggest that urban development, forest cover, the landscape metric, and elevation are the major driving factors of AOD. In general, higher AOD tends to be located in places with more urban development, less forest, and flat terrain.Keywords: aerosol optical depth, geographically weighted regression, land use change, Wuhan agglomeration
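The spatial autocorrelation statistic quoted above, Moran's I, is straightforward to compute; a toy sketch with four hypothetical cells and a simple adjacency weight matrix, not the WHA grid:

```python
def morans_i(values, weights):
    """Global Moran's I for spatial autocorrelation:
    I = (n / W) * sum_ij w_ij (x_i - mean)(x_j - mean) / sum_i (x_i - mean)^2,
    where weights[i][j] is the spatial weight between units i and j."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Toy 1-D strip of four cells with simple adjacency weights
vals = [1.0, 1.2, 3.0, 3.1]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
mi = morans_i(vals, w)  # positive: similar values cluster together
```

A clearly positive I, as reported for AOD over the WHA, signals spatial clustering; GWR then fits a local regression at each location with distance-decay weights instead of a single global OLS fit.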
Procedia PDF Downloads 357
574 Modeling of Ductile Fracture Using Stress-Modified Critical Strain Criterion for Typical Pressure Vessel Steel
Authors: Carlos Cuenca, Diego Sarzosa
Abstract:
Ductile fracture occurs by the mechanism of void nucleation, void growth, and coalescence. Potential initiation sites are second-phase particles or non-metallic inclusions. Modelling ductile damage at the microscopic level is a very difficult and complex task for engineers. Therefore, conservative predictions of ductile failure using simple models are necessary during the design and optimization of critical structures like pressure vessels and pipelines. It is now well established that the initiation phase is strongly influenced by the stress triaxiality and the plastic deformation at the microscopic level. A simple model for studying ductile failure under multiaxial stress conditions is thus the Stress-Modified Critical Strain (SMCS) approach. Ductile rupture has been studied for a structural steel under different stress triaxiality conditions using the SMCS method. Experimental tests on notched round bars were carried out to characterize the relation between stress triaxiality and equivalent plastic strain. After calibration of the plasticity and damage properties, predictions are made for low-constraint bending specimens with and without side grooves. The evolution of the stress/strain fields is compared between the different geometries. Advantages and disadvantages of the SMCS methodology are discussed.Keywords: damage, SMCS, SEB, steel, failure
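The SMCS criterion itself is a compact exponential relation between the critical equivalent plastic strain and the stress triaxiality; a sketch with an illustrative, hypothetical calibration constant rather than the notched-bar value:

```python
import math

def smcs_critical_strain(triaxiality, alpha, beta=0.0):
    """Stress-Modified Critical Strain criterion: the equivalent plastic
    strain at ductile failure decays exponentially with the stress
    triaxiality T = sigma_m / sigma_e (Rice-Tracey-type form)."""
    return alpha * math.exp(-1.5 * triaxiality) + beta

def fails(eq_plastic_strain, triaxiality, alpha, beta=0.0):
    """Damage initiates where the accumulated equivalent plastic strain
    exceeds the triaxiality-dependent limit."""
    return eq_plastic_strain >= smcs_critical_strain(triaxiality, alpha, beta)

# Illustrative calibration (alpha would come from notched-bar tests)
alpha = 1.8
limits = {T: smcs_critical_strain(T, alpha) for T in (0.33, 0.6, 1.0)}
```

The monotone decay captures the experimental trend: sharply notched (high-triaxiality) specimens fail at much lower local strains than smooth bars.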
Procedia PDF Downloads 297
573 Perception of Public Transport Quality of Service among Regular Private Vehicle Users in Five European Cities
Authors: Juan de Ona, Esperanza Estevez, Rocío de Ona
Abstract:
Urban traffic levels can be reduced by drawing travelers away from private vehicles and over to public transport. This modal change can be achieved either by introducing restrictions on private vehicles or by introducing measures which increase people’s satisfaction with public transport. For public transport users, quality of service affects customer satisfaction, which, in turn, influences behavioral intentions towards the service. This paper identifies the main attributes which influence the perception private vehicle users have of the public transport services provided in five European cities: Berlin, Lisbon, London, Madrid, and Rome. Ordinal logit models were applied to an online panel survey with a sample size of 2,500 regular private vehicle users (approximately 500 inhabitants per city). To achieve a comprehensive analysis and to deal with heterogeneity in perceptions, 15 models were developed: one for the entire sample and 14 for user segments. The results show differences between the cities and among the segments. With Madrid taken as the reference city, the results indicate that its inhabitants are satisfied with public transport and that the most important public transport service attributes for private vehicle users are frequency, speed, and intermodality. Frequency is important for all the segments, while speed and intermodality are important for most of them. The analysis by segments also identified attributes which, although not important in most cases, are relevant for specific segments, and the study points out important differences between the five cities. Findings from this study can be used to develop policies and recommendations for persuading private vehicle users to switch to public transport.Keywords: service quality, satisfaction, public transportation, private vehicle users, car users, segmentation, ordered logit
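The ordinal logit machinery behind such models can be sketched briefly; the coefficients and cutpoints below are hypothetical, not those estimated from the panel survey:

```python
import math

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities in an ordered logit model:
    P(y <= j) = logistic(kappa_j - x*beta); each category probability
    is the difference of adjacent cumulative terms.
    `cutpoints` must be strictly increasing."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(k - xb) for k in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical satisfaction rating on a 3-point scale; the linear
# predictor bundles attributes such as frequency, speed, intermodality
xb = 0.8 * 1.0 + 0.5 * 0.0   # e.g. high perceived frequency, low speed
probs = ordered_logit_probs(xb, cutpoints=[-0.5, 1.2])
```

Raising the linear predictor (e.g. improving perceived frequency) shifts probability mass toward the highest satisfaction category, which is how attribute importance is read off the fitted coefficients.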
Procedia PDF Downloads 117
572 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads
Authors: Salah R. Al Zaidee, Ali S. Mahdi
Abstract:
Except for simple problems involving statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design must be carried out within each search loop. With such implicit functions, the structural engineer is usually forced to write his or her own computer code for analysis, design, and the search for the optimum among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum search. With a meta-model, well-known software for structural analysis and design can be used in sequence with optimization software. In this paper, the meta-model approach has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function were generated from the analysis and assessment of many design proposals with CSI SAP software. These data were then used in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R² in the range 0.88 to 0.99, were noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.Keywords: meta-model, objective function, steel frames, seismic analysis, design
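A pure quadratic meta-model fit can be sketched as below, with a synthetic objective standing in for the SAP-generated data and plain least squares standing in for SPSS:

```python
def pure_quadratic_design(X):
    """Design matrix for a pure quadratic model (no cross terms):
    columns are [1, x1..xk, x1^2..xk^2]."""
    return [[1.0] + row + [x * x for x in row] for row in X]

def solve_normal_equations(A, y):
    """Least-squares fit via the normal equations (A^T A) b = A^T y,
    solved with Gaussian elimination -- fine for a handful of terms."""
    n, p = len(A), len(A[0])
    ata = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    aty = [sum(A[r][i] * y[r] for r in range(n)) for i in range(p)]
    for col in range(p):                      # forward elimination
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for c in range(col, p):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    b = [0.0] * p
    for i in reversed(range(p)):              # back substitution
        b[i] = (aty[i] - sum(ata[i][j] * b[j]
                             for j in range(i + 1, p))) / ata[i][i]
    return b

# Hypothetical training data: two design variables sampled on a grid,
# with a synthetic pure quadratic "implicit objective" as the target
X = [[x1, x2] for x1 in (0.0, 1.0, 2.0, 3.0) for x2 in (0.0, 1.0, 2.0)]
y = [2.0 + 0.5 * x1 - x2 + 0.3 * x1 * x1 + 0.1 * x2 * x2 for x1, x2 in X]
coeffs = solve_normal_equations(pure_quadratic_design(X), y)
```

Once the coefficients are fitted, the explicit surrogate can be handed to any optimizer without re-running the structural analysis inside the search loop, which is exactly the decoupling the meta-model provides.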
Procedia PDF Downloads 243
571 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process
Authors: Yu-Hsuan Liu, Ying-Fang Wang
Abstract:
The objectives of the present study are to characterize nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions of the respiratory tract, respectively. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) houses a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling inside the chamber to measure nanoparticle number concentrations and particle size distributions. The SMPS recorded the particle number concentration every 3 minutes and covered diameters of 11–372 nm with aerosol and sheath flow rates of 0.6 and 6 L/min, respectively. The resulting size distributions were used to predict nanoparticle deposition in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board’s (NRPB’s) LUDEP software. The results show that the nanoparticle number concentrations in the indoor background and in the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions (head, H: 10.9%; tracheobronchial, TB: 19.3%). Applying these characteristics of nanoparticles emitted from the LMD process could provide valuable scientific evidence for exposure assessments in the future.Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region
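Given the reported NMD and GSD, number fractions in any diameter band follow directly from the lognormal CDF; a sketch that checks how much of the distribution the SMPS window captures (the region-deposition fractions themselves came from the LUDEP model, not from this formula):

```python
import math

def lognormal_cdf(d, nmd, gsd):
    """Cumulative number fraction below diameter d for a lognormal size
    distribution with number median diameter NMD and geometric
    standard deviation GSD."""
    z = (math.log(d) - math.log(nmd)) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fraction_in_band(lo, hi, nmd, gsd):
    """Number fraction of particles with diameters in [lo, hi]."""
    return lognormal_cdf(hi, nmd, gsd) - lognormal_cdf(lo, nmd, gsd)

# Reported distribution: NMD = 142 nm, GSD = 1.86; fraction of the
# number count inside the SMPS measurement window (11-372 nm)
covered = fraction_in_band(11.0, 372.0, 142.0, 1.86)
```

Roughly 94% of the number count falls inside the instrument's 11–372 nm range, so the fitted NMD/GSD pair summarizes the measured mode well.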
Procedia PDF Downloads 284
570 Sleep Paralysis: Its Genesis and Qualitative Analysis of Case Histories
Authors: Nandita Chaube, S. S. Nathawat
Abstract:
Sleep paralysis is a sleep disturbance in which people experience hypnagogic or hypnopompic hallucinations marked by an inability to move their bodies or speak while remaining conscious of their surroundings. Philosophical explanations of sleep paralysis appear in ancient texts in terms of the incubus and succubus. Pathologically, it has been linked to several disorders, including narcolepsy, migraines, anxiety disorders, and obstructive sleep apnea, but it can also occur in isolation. Other significant factors may include perceived stress and spiritual and paranormal beliefs. Hence, a qualitative analysis of five cases reporting symptoms of sleep disturbance meeting the criterion of sleep paralysis is reported here. The study considered various psychological factors such as stressful life events, feelings of inadequacy, spirituality, and paranormal beliefs. Results disclosed that four of the five cases were inclined towards paranormal beliefs, and the entire sample showed a noticeably elevated level of spirituality and feelings of inadequacy. Furthermore, three cases reported experiencing greater stress following life events. Among other factors, all the cases were characterized by sleeping in the supine position, sleeping alone, an experience of fear, a sense of pressure on the chest, a sensed presence in the room, and an increased level of feelings of inadequacy.Keywords: genesis, inadequacy, paranormal, sleep paralysis, spiritual, stress
Procedia PDF Downloads 246
569 Detecting and Thwarting Interest Flooding Attack in Information Centric Network
Authors: Vimala Rani P, Narasimha Malikarjunan, Mercy Shalinie S
Abstract:
Named Data Networking was brought forth as an instantiation of information-centric networking. Attackers can send a colossal number of spoofed Interests to take hold of the Pending Interest Table (PIT) in what is called an Interest Flooding Attack (IFA), since incoming Interests are recorded in the PITs of the intermediate routers until the corresponding Data packets arrive or the entries exceed their time limit. These attacks can be detrimental to network performance. Traditional IFA detection techniques rely on criteria such as the PIT expiration rate or the Interest satisfaction rate, which cannot reliably distinguish an IFA from legitimate traffic, and threshold-based traditional methods are easily affected by casually chosen threshold values. This article proposes an accurate IFA detection mechanism based on a Multiple-Feature-based Extreme Learning Machine (MF-ELM). The accuracy of attack detection can be increased by presenting the entropy of Interest names, the Interest satisfaction rate, and PIT usage as features extracted for the MF-ELM classifier. Furthermore, we deploy a queue-based hostile Interest prefix mitigation mechanism. The inference from this real-time test bed is that the mechanism can help the network resist IFA with higher accuracy and efficiency.Keywords: information-centric network, pending interest table, interest flooding attack, MF-ELM classifier, queue-based mitigation strategy
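Of the three features fed to the MF-ELM classifier, the entropy of Interest names is the simplest to sketch; the traffic windows below are invented for illustration:

```python
import math
from collections import Counter

def name_entropy(interest_names):
    """Shannon entropy (bits) of the Interest-name distribution in a
    traffic window; an IFA built from randomized spoofed names drives
    this entropy sharply upward."""
    counts = Counter(interest_names)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Hypothetical traffic windows
normal = ["/video/a", "/video/a", "/video/b", "/news/top", "/video/a"]
attack = [f"/video/{i}" for i in range(100)]  # randomized spoofed names
h_normal = name_entropy(normal)   # repeated popular names: low entropy
h_attack = name_entropy(attack)   # 100 unique names: log2(100) bits
```

Combined with the Interest satisfaction rate and PIT usage, this gives the classifier a feature that spikes during flooding even when overall traffic volume looks plausible.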
Procedia PDF Downloads 205
568 Deep Learning to Improve the 5G NR Uplink Control Channel
Authors: Ahmed Krobba, Meriem Touzene, Mohamed Debeyche
Abstract:
The fifth-generation wireless communications system (5G) will provide more diverse applications and higher-quality services for users than long-term evolution 4G (LTE). 5G uses higher carrier frequencies, which suffer from greater signal loss across the 5G coverage area. Many 5G users therefore cannot obtain high-quality communications due to transmission channel noise and channel complexity. The Physical Uplink Control Channel (PUCCH-NR: Physical Uplink Control Channel, New Radio) plays a crucial role in 5G NR telecommunication technology; it is mainly used to transmit the Uplink Control Information (UCI). This study evaluates the performance of the physical uplink control channel PUCCH-NR at low signal-to-noise ratios with various numbers of receive antennas. We propose an artificial intelligence approach based on deep neural networks (deep learning) to estimate the PUCCH-NR channel and compare it with conventional methods such as least squares (LS) and minimum mean square error (MMSE). To evaluate channel performance, we use the block error rate (BLER) as the evaluation criterion for the communication system. The results show that the deep neural network method gives the best performance compared with MMSE and LS.Keywords: 5G network, uplink, PUCCH channel, NR-PUCCH channel, deep learning
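The two conventional baselines can be sketched for a single pilot symbol; this is a generic scalar LS versus LMMSE comparison under an assumed zero-mean Rayleigh channel, not the paper's PUCCH-NR configuration:

```python
import random

def ls_estimate(y, x):
    """Least-squares channel estimate at a known pilot: h_ls = y / x."""
    return y / x

def mmse_estimate(y, x, h_var, noise_var):
    """Scalar LMMSE estimate: shrink the LS estimate toward zero in
    proportion to the noise level (zero-mean channel prior assumed)."""
    return (h_var / (h_var + noise_var / abs(x) ** 2)) * (y / x)

# Hypothetical single-pilot Monte Carlo at low SNR (3 dB)
rng = random.Random(1)
h_var, noise_var, x = 1.0, 0.5, 1.0 + 0.0j
err_ls = err_mmse = 0.0
trials = 5000
for _ in range(trials):
    h = complex(rng.gauss(0, (h_var / 2) ** 0.5),
                rng.gauss(0, (h_var / 2) ** 0.5))
    n = complex(rng.gauss(0, (noise_var / 2) ** 0.5),
                rng.gauss(0, (noise_var / 2) ** 0.5))
    y = h * x + n                      # received pilot observation
    err_ls += abs(ls_estimate(y, x) - h) ** 2
    err_mmse += abs(mmse_estimate(y, x, h_var, noise_var) - h) ** 2
```

At low SNR the LMMSE estimator shows the lower mean-squared error because it exploits the channel statistics that LS ignores; a learned estimator plays the same role without requiring those statistics in closed form.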
Procedia PDF Downloads 82