Search results for: distributed dislocation technique
7685 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Findings of a Time-Series Analysis
Authors: Syed Toqueer Akhter, Hussain Hamid
Abstract:
Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not just economically motivated; they are also in search of a safe living environment in more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and curtailment of civil liberty, on skilled labor migration. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 rating the political terror and violence a country encounters in a particular year), political rights (a rating of 1-7 describing the ability of people to participate without restraint in the political process) and civil liberty (a rating of 1-7, where civil liberty is defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process: previous events and earlier migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights and civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model
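The distributed lag estimation named above can be sketched as an ordinary least-squares fit of the outcome on current and lagged values of an explanatory variable. The data, lag order and coefficients below are synthetic illustrations, not the paper's actual series:

```python
import numpy as np

def distributed_lag_design(x, n_lags):
    """Build the design matrix [1, x_t, x_{t-1}, ..., x_{t-n_lags}]."""
    rows = []
    for t in range(n_lags, len(x)):
        rows.append([1.0] + [x[t - k] for k in range(n_lags + 1)])
    return np.array(rows)

rng = np.random.default_rng(0)
x = rng.normal(size=100)                      # e.g. a political terror index over time
# outcome responds to the current and two lagged values of x (coefficients assumed)
y_full = 0.5 + 1.0 * x + 0.6 * np.roll(x, 1) + 0.3 * np.roll(x, 2)
X = distributed_lag_design(x, n_lags=2)
y = y_full[2:]                                # drop rows contaminated by the wrap-around
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates of intercept and lag weights
print(beta.round(2))
```

With noiseless synthetic data the fit recovers the assumed lag weights exactly; with real series one would also test lag-length choice and serial correlation.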
Procedia PDF Downloads 102
7684 Optical Characterization and Surface Morphology of SnO2 Thin Films Prepared by Spin Coating Technique
Authors: J. O. Ajayi, S. S. Oluyamo, D. B. Agunbiade
Abstract:
In this work, tin oxide (SnO2) thin films were prepared using the spin coating technique, and the effects of precursor concentration on the thin film properties were investigated. Tin oxide was synthesized from anhydrous tin (II) chloride (SnCl2) dispersed in methanol and acetic acid. The deposited metallic oxide (SnO2) films were characterized using a UV spectrophotometer and a scanning electron microscope (SEM). From the absorption spectra, absorption increases with decreasing precursor concentration, and absorbance in the VIS region is low at higher concentrations. The optical transmission spectrum shows that transmission increases as the precursor concentration decreases; the maximum transmission in the visible region is about 90% for films prepared with 0.2 M. The reflectance of the thin films also increases with precursor concentration. The films have high transparency (more than 85%) and low reflectance (less than 40%) in the VIS region. Investigation showed that the direct band gap value increased from 3.79 eV to 3.82 eV as the precursor concentration decreased from 0.6 M to 0.2 M; the average direct band gap energy for all the tin oxide films was estimated to be 3.80 eV. The effect of precursor concentration was directly observed in crystal outgrowth and surface particle densification, both of which were found to increase proportionately with higher concentration.
Keywords: anhydrous tin (II) chloride, densification, UV-VIS region, spin coating technique
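Direct band gap values like those quoted above are conventionally extracted with a Tauc plot: (αhν)² is plotted against photon energy hν and the linear edge is extrapolated to the energy axis. A minimal sketch on an idealized synthetic absorption edge; the constants are illustrative, with the edge placed at the paper's average value of 3.80 eV:

```python
import numpy as np

# Idealized direct-allowed edge: (alpha*h*nu)^2 = C * (h*nu - Eg) above Eg, 0 below
Eg_true = 3.80                                   # eV, assumed band gap
hv = np.linspace(3.0, 4.5, 200)                  # photon energy axis (eV)
tauc = np.clip(2.0e9 * (hv - Eg_true), 0, None)  # (alpha*h*nu)^2, C illustrative

# Fit the linear rising portion and extrapolate down to the energy axis
mask = tauc > 0.2 * tauc.max()                   # keep only the strong linear region
slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)
Eg_est = -intercept / slope                      # x-intercept = optical band gap
print(round(Eg_est, 2))
```

On measured spectra, α is first computed from absorbance and film thickness, and the choice of the linear fitting window is the main source of uncertainty.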
Procedia PDF Downloads 260
7683 To Study the New Invocation of Biometric Authentication Technique
Authors: Aparna Gulhane
Abstract:
Biometrics is the science and technology of measuring and analyzing biological data for the purpose of people identification and recognition. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements. Biometric systems are used to authenticate a person's identity; the idea is to use the special characteristics of a person to identify him or her. This paper presents biometric authentication techniques and their actual deployment potential, with an independent testing of various biometric authentication products and technologies.
Keywords: types of biometrics, importance of biometrics, review of biometrics and new implementations, biometric authentication technique
Procedia PDF Downloads 319
7682 Data-Driven Simulation Tools for DER and Battery Rich Power Grids
Authors: Ali Moradiamani, Samaneh Sadat Sajjadi, Mahdi Jalili
Abstract:
Power system analysis has been a major research topic in the generation and distribution sectors, in both industry and academia, for a long time. Several load flow and fault analysis scenarios are normally performed to study the performance of different parts of the grid in the context of, for example, voltage and frequency control. Software tools, such as PSCAD, PSSE, and PowerFactory DIgSILENT, have been developed to perform these analyses accurately. The distribution grid had long been the passive part of the grid, known as the grid of consumers. However, a significant paradigm shift has happened with the emergence of Distributed Energy Resources (DERs) at the distribution level. This means that the concept of power system analysis needs to be extended to the distribution grid, especially considering self-sufficient technologies such as microgrids. Compared to the generation and transmission levels, the distribution level includes significantly more generation/consumption nodes thanks to rooftop PV solar generation and battery energy storage systems. In addition, different consumption profiles are expected from household residents, resulting in a diverse set of scenarios. The emergence of electric vehicles will make the environment even more complicated, considering their charging (and possibly discharging) requirements. These complexities, as well as the large size of distribution grids, create challenges for the available power system analysis software. In this paper, we study the requirements of simulation tools in the distribution grid and how data-driven algorithms are required to increase the accuracy of the simulation results.
Keywords: smart grids, distributed energy resources, electric vehicles, battery storage systems, simulation tools
Procedia PDF Downloads 102
7681 Process Safety Evaluation of a Nuclear Power Plant through Virtual Process Hazard Analysis (PHA) Using the What-If Technique
Authors: Lormaine Anne Branzuela, Elysa Largo, Julie Marisol Pagalilauan, Neil Concibido, Monet Concepcion Detras
Abstract:
Energy is a necessity both for the people and for the country. The demand for energy is continually increasing, but the supply is not keeping pace. The possible reopening of the Bataan Nuclear Power Plant (BNPP) in the Philippines has been circulating in the media recently. The general public has been hesitant to accept the inclusion of nuclear energy in the Philippine energy mix due to the perceived unsafe condition of the plant. This study evaluated the possible operations of a nuclear power plant of the same type as the BNPP, considering the safety of the workers, the public, and the environment, using a Process Hazard Analysis (PHA) method. The What-If Technique was utilized to identify the hazards and consequences of the operations of the plant, together with the level of risk each entails. Through the brainstorming sessions of the PHA team, it was found that the most critical system in the plant is the primary system. Possible leakages in pipes and equipment due to weakened seals and welds, and blockages in the coolant path due to fouling, were the most common scenarios identified. These further lead to the most critical scenarios - radioactive leakage through sump contamination, nuclear meltdown, and equipment damage and explosion - which could result in multiple injuries and fatalities, and environmental impacts.
Keywords: process safety management, process hazard analysis, What-If technique, nuclear power plant
Procedia PDF Downloads 219
7680 Rectenna Modeling Based on MoM-GEC Method for RF Energy Harvesting
Authors: Soulayma Smirani, Mourad Aidi, Taoufik Aguili
Abstract:
Energy harvesting has arisen as a prominent research area for low-power delivery to RF devices, and rectennas have become a key element in this technology. In this paper, electromagnetic modeling of a rectenna system is presented. In our approach, a hybrid technique is demonstrated that associates the method of auxiliary sources (MAS) with MoM-GEC (the method of moments combined with the generalized equivalent circuit technique). Auxiliary sources are used to substitute for specific electronic devices, so a simple and controllable model is obtained, which can easily be interconnected to form different topologies of rectenna arrays for more energy harvesting. Finally, simulation results show the feasibility and simplicity of the proposed rectenna model, with high precision and computational efficiency.
Keywords: computational electromagnetics, MoM-GEC method, rectennas, RF energy harvesting
Procedia PDF Downloads 169
7679 Spatial REE Geochemical Modeling at Lake Acıgöl, Denizli, Turkey: Analytical Approaches on Spatial Interpolation and Spatial Correlation
Authors: M. Budakoglu, M. Karaman, A. Abdelnasser, M. Kumral
Abstract:
The spatial interpolation and spatial correlation of the rare earth elements (REE) in the lake surface sediments of Lake Acıgöl and its surrounding lithological units are carried out using GIS techniques, namely Inverse Distance Weighted (IDW) interpolation and Geographically Weighted Regression (GWR). The IDW spatial interpolation shows that lithological units such as the Hayrettin Formation north of Lake Acıgöl have higher REE contents than the lake sediments, as well as higher ∑LREE and ∑HREE contents. However, Eu/Eu* values (based on the chondrite-normalized REE pattern) are higher in some lake surface sediments than in the lithological units, which indicates a negative Eu anomaly. The spatial interpolation of the V/Cr ratio also indicates that the Acıgöl lithological units and lake sediments were deposited under oxic and dysoxic conditions. The spatial correlation, carried out with the GWR technique, shows a spatial correlation coefficient between ∑LREE and ∑HREE that is higher in the Hayrettin and Cameli Formations than in the other lithological units and the lake surface sediments. Also, the correlation between the REEs and Sc and Al indicates that the REE abundances of the Lake Acıgöl sediments were derived by weathering of the local bedrock around the lake.
Keywords: spatial geochemical modeling, IDW, GWR techniques, REE, lake sediments, Lake Acıgöl, Turkey
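The IDW interpolation named above estimates a value at an unsampled point as a weighted average of the samples, with weights w_i = 1/d_i^p decaying with distance d_i to the query point. A minimal sketch with toy sample coordinates and concentrations (not the Lake Acıgöl data):

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse Distance Weighted estimate at `query` from sampled `points`."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                         # query coincides with a sample site
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                         # closer samples weigh more
    return float(np.sum(w * values) / np.sum(w))

# Toy REE concentrations (ppm) at four sediment sample sites
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 20.0, 10.0])
print(idw(pts, vals, np.array([0.5, 0.5])))    # equidistant point -> plain average
```

The exponent `power` controls how local the estimate is; p = 2 is the common default in GIS packages.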
Procedia PDF Downloads 552
7678 Advanced Approach to Analyzing the Thin Strip Profile in Cold Rolling on a Pair Roll Crossing and Shifting Mill Using an Arbitrary Lagrangian-Eulerian Technique
Authors: Abdulrahman Aljabri, Essam R. I. Mahmoud, Hamad Almohamedi, Zhengyi Jiang
Abstract:
Cold rolled thin strip has received intensive attention through technological and theoretical progress in the rolling process, and researchers have focused on controlling the strip during rolling as an essential factor in producing thinner strip with good shape and profile. An advanced approach is proposed to analyze the thin strip profile in cold rolling on a pair roll crossing and shifting mill using Finite Element Analysis (FEA) with an Arbitrary Lagrangian-Eulerian (ALE) technique. The ALE technique enables more flexibility in the adjustment of the finite element mesh, which provides a significant tool for simulating the thin strip under realistic rolling process constraints and yields accurate model results. The FEA can provide a theoretical basis for a 3D model for controlling the strip shape and profile in thin strip rolling, deliver optimal rolling process parameters, and suggest corrective changes during cold rolling of thin strip.
Keywords: pair roll crossing, work roll shifting, strip shape and profile, finite element modeling
Procedia PDF Downloads 95
7677 Microscopic and Mesoscopic Deformation Behaviors of Mg-2Gd Alloy with or without Li Addition
Authors: Jing Li, Li Jin, Fulin Wang, Jie Dong, Wenjiang Ding
Abstract:
Mg-Li dual-phase alloys exhibit a better combination of yield strength and elongation than single-phase Mg alloys. To exploit this deformation behavior, the deformation mechanisms of the Mg-2Gd alloy with and without Li addition, i.e., Mg-6Li-2Gd and Mg-2Gd, have been studied at both the microscale and the mesoscale. EBSD-assisted slip trace, twin trace, and texture evolution analyses show that the α-Mg phase of the Mg-6Li-2Gd alloy exhibits different microscopic deformation mechanisms from the Mg-2Gd alloy: mainly prismatic slip in the former, versus basal slip, prismatic slip and extension twinning in the latter. Further Schmid factor analysis attributes these different intra-phase deformation mechanisms to the higher critical resolved shear stress (CRSS) value of extension twinning and the lower CRSSprismatic/CRSSbasal ratio in the α-Mg phase of the Mg-6Li-2Gd alloy. Additionally, Li addition induces a dual-phase microstructure in the Mg-6Li-2Gd alloy, leading to the formation of hetero-deformation induced (HDI) stress at the mesoscale. This is evidenced by the hysteresis loops appearing during loading-unloading-reloading (LUR) tensile tests and by the activation of multiple slip systems in the α-Mg phase neighboring the β-Li phase. The Mg-6Li-2Gd alloy shows higher yield strength due to the harder α-Mg phase arising from the solid solution hardening of the Li addition, as well as the strengthening of the soft β-Li phase by the HDI stress during the yield stage.
Since the strain hardening rate of the Mg-6Li-2Gd alloy is lower than that of the Mg-2Gd alloy after ~2% strain, which is partly due to the weak contribution of the HDI stress, the Mg-6Li-2Gd alloy shows no obvious increase in uniform elongation over the Mg-2Gd alloy. But since the β-Li phase is effective in blunting crack tips, the Mg-6Li-2Gd alloy shows greater non-uniform elongation, which leads to a higher total elongation than the Mg-2Gd alloy.
Keywords: Mg-Li-Gd dual-phase alloy, phase boundary, HDI stress, dislocation slip activity, mechanical properties
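The Schmid factor analysis cited above rates each slip or twinning system by m = cos φ · cos λ, where φ is the angle between the loading axis and the slip plane normal and λ the angle between the loading axis and the slip direction. A minimal sketch in Cartesian coordinates; the vectors are illustrative, and a full HCP treatment would first convert four-index Miller-Bravais indices to Cartesian vectors:

```python
import numpy as np

def schmid_factor(load_dir, plane_normal, slip_dir):
    """m = |cos(phi)| * |cos(lambda)| for uniaxial loading along load_dir."""
    load = np.asarray(load_dir, float); load /= np.linalg.norm(load)
    n = np.asarray(plane_normal, float); n /= np.linalg.norm(n)
    b = np.asarray(slip_dir, float); b /= np.linalg.norm(b)
    return abs(load @ n) * abs(load @ b)

# Plane normal and slip direction both at 45 degrees to the load:
# this orientation gives the maximum possible Schmid factor of 0.5
m = schmid_factor([1, 1, 0], [1, 0, 0], [0, 1, 0])
print(round(m, 2))
```

Systems with higher m experience a larger resolved shear stress, so activation is favored when m·σ exceeds the system's CRSS, which is how the CRSS ratios discussed above enter the analysis.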
Procedia PDF Downloads 200
7676 Chinese Event Detection Technique Based on Dependency Parsing and Rule Matching
Authors: Weitao Lin
Abstract:
To quickly extract adequate information from large-scale unstructured text data, this paper studies the representation of events in Chinese scenarios and performs a regularized abstraction of them. It proposes a Chinese event detection technique based on dependency parsing and rule matching. The method first performs dependency parsing on the original utterance, then performs pattern matching at the word or phrase granularity based on the results of the dependency syntactic analysis, filters out utterances with prominent non-event characteristics, and obtains the final results. The experimental results show the effectiveness of the method.
Keywords: natural language processing, Chinese event detection, rule matching, dependency parsing
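The pipeline described above, dependency parsing followed by pattern matching over the parse, can be sketched with a hand-written parse and a single illustrative rule. The trigger words, relation labels and event schema below are assumptions for illustration, not the paper's actual rule set:

```python
# Parser output as (index, word, head_index, dependency_relation); head 0 = root
parse = [
    (1, "公司", 2, "nsubj"),      # "the company" -> subject of the verb
    (2, "收购", 0, "root"),       # "acquired"    -> main predicate
    (3, "竞争对手", 2, "dobj"),   # "a competitor" -> direct object
]

TRIGGER_VERBS = {"收购", "合并"}  # illustrative acquisition / merger triggers

def match_event(parse):
    """Fire an 'acquisition' event if a trigger verb has both a subject and an object."""
    for idx, word, head, rel in parse:
        if rel == "root" and word in TRIGGER_VERBS:
            subj = next((w for i, w, h, r in parse if h == idx and r == "nsubj"), None)
            obj = next((w for i, w, h, r in parse if h == idx and r == "dobj"), None)
            if subj and obj:                  # rule matched at phrase granularity
                return {"type": "acquisition", "agent": subj, "patient": obj}
    return None                               # filtered out as a non-event utterance

print(match_event(parse))
```

Requiring both arguments on the trigger is one way to filter utterances with "prominent non-event characteristics": a trigger word alone is not enough to fire the rule.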
Procedia PDF Downloads 137
7675 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric assessing these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined.
A machine learning reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation, against the baseline requirements, in comparison to existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
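A much-simplified version of the evaluation described above, functions distributed across nodes, random node attacks, and a functional-availability score, can be sketched as a Monte Carlo estimate over a toy placement. The node names, placement and uniform-random attack model below are illustrative assumptions, standing in for the paper's reinforcement-learning attack agent:

```python
import random

# Toy architecture: each function is replicated across hosting nodes; the system
# is "up" under an attack if every function retains at least one surviving host.
placement = {
    "ingest":  {"edge1", "edge2"},
    "compute": {"cloud1", "fog1"},
    "store":   {"cloud1", "cloud2"},
}
nodes = set().union(*placement.values())

def availability(placement, n_killed, trials=20000, seed=1):
    """Fraction of random n_killed-node attacks under which all functions survive."""
    rng = random.Random(seed)
    up = 0
    for _ in range(trials):
        killed = set(rng.sample(sorted(nodes), n_killed))
        if all(hosts - killed for hosts in placement.values()):
            up += 1
    return up / trials

for k in range(4):                 # availability across a spectrum of degradations
    print(k, availability(placement, k))
```

Sweeping the attack size traces out the degradation curve that the proposed metric would summarize; comparing curves across candidate placements is the architecture-selection step.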
Procedia PDF Downloads 106
7674 Role of Water Supply in the Functioning of the MLDB Systems
Authors: Ramanpreet Kaur, Upasana Sharma
Abstract:
The purpose of this paper is to address the challenges faced by the MLDB system at a piston foundry plant due to interruptions in the supply of water. For the MLDB system to work, two sub-units must be connected to the robotic main unit. The system cannot function without the robotics and the water supply by fan (WSF); insufficient water supply is a cause of system failure. The system operates at top performance when using both sub-units; if one sub-unit fails, the system capacity is reduced. Priority of repair is given to the main unit, i.e., the robotics and the WSF. To solve the problem, the semi-Markov process and the regenerative point technique are used. Relevant graphs are also included for a particular case.
Keywords: MLDB system, robotic, semi-Markov process, regenerative point technique
Procedia PDF Downloads 74
7673 Multiple Fault Diagnosis in Digital Circuits Using Critical Path Tracing and Enhanced Deduction Algorithm
Authors: Mohamed Mahmoud
Abstract:
This paper develops an effect-cause analysis technique for fault diagnosis in digital circuits. The main algorithm of our technique is based on the Enhanced Deduction Algorithm, which processes the real response of the circuit under test (CUT) to the applied test T in order to deduce the values of the internal lines. An experimental version of the algorithm has been implemented in C++; the code takes about 7592 lines. The internal values are determined based on the logic values under the permanent stuck-fault model. Using a backtracking strategy guarantees that the actual values are covered by at least one solution, or that no solution is found.
Keywords: enhanced deduction algorithm, backtracking strategy, automatic test equipment, verification
Procedia PDF Downloads 118
7672 Comparative Study of Various Treatment Positioning Techniques: A Site-Specific Study - Ca. Breast
Authors: Kamal Kaushik, Dandpani Epili, Ajay G. V., Ashutosh, S. Pradhaan
Abstract:
Introduction: Radiation therapy has come a long way over the decades, from 2-dimensional radiotherapy to intensity-modulated radiation therapy (IMRT) or VMAT. For advanced radiation therapy, we need better patient position reproducibility to deliver precise, high-quality treatment, which raises the need for better image guidance technologies for precise patient positioning. This study presents a two-tattoo simulation with roll correction technique that is comparable to other advanced patient positioning techniques. Objective: This site-specific study aims to compare various treatment positioning techniques used for patients with Ca. breast undergoing radiotherapy. We compare 5 different positioning methods used for the treatment of Ca. breast, namely i) Vacloc with 3 tattoos, ii) breast board with three tattoos, iii) thermoplastic cast with three fiducials, iv) breast board with a thermoplastic mask and 3 tattoos, and v) breast board with 2 tattoos - a roll correction method. Methods and material: All-in-one (AIO) solution immobilization was used in all patient positioning techniques. The process of two-tattoo simulation includes positioning the patient with the help of a thoracic-abdomen wedge, armrest and knee rest. After proper patient positioning, two tattoos are marked on the treatment side of the patient, and fiducials are placed as per the clinical border markers: (1) sternal notch (lower border of the clavicle head), (2) 2 cm below the contralateral breast, (3) midline between markers 1 and 2, and (4) mid-axillary on the same axis as marker 3 (markers 3 and 4 should be on the same axis). During plan implementation, a roll depth correction is applied as per the anterior and lateral positioning tattoos, followed by the shifts required for the isocentre position.
The shifts are then verified by SSD on the patient surface, followed by radiographic verification using Cone Beam Computed Tomography (CBCT). Results: All five positioning techniques were compared in terms of the shifts produced in the vertical, longitudinal and lateral directions. The observations clearly suggest that the average longitudinal shifts with the two-tattoo roll correction technique are smaller than with every other patient positioning technique, while the vertical and lateral shifts are comparable to other modern positioning techniques. Conclusion: The two-tattoo simulation with roll correction technique provides a better patient setup with a technique that can be implemented easily in most radiotherapy centers across developing nations where 3D verification techniques are not available with the delivery units, as the shifts observed are quite minimal and comparable to those with Vacloc and modern amenities.
Keywords: Ca. breast, breast board, roll correction technique, CBCT
Procedia PDF Downloads 134
7671 Bundle Block Detection Using Spectral Coherence and Levenberg Marquardt Neural Network
Authors: K. Padmavathi, K. Sri Ramakrishna
Abstract:
This study describes a procedure for the detection of Left and Right Bundle Branch Block (LBBB and RBBB) ECG patterns using the spectral coherence (SC) technique and an LM neural network. The coherence function finds common frequencies between two signals and evaluates their similarity. The QT variations of bundle blocks are observed in lead V1 of the ECG. The spectral coherence technique uses Welch's method for calculating the PSD. For the detection of normal and bundle block beats, the SC output values are given as input features to the LMNN classifier. The overall accuracy of the LMNN classifier is 99.5 percent. The data were collected from the MIT-BIH Arrhythmia database.
Keywords: bundle block, SC, LMNN classifier, Welch method, PSD, MIT-BIH, arrhythmia database
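The Welch-based coherence computation described above is available directly in SciPy as scipy.signal.coherence, which estimates the PSDs and cross-PSD by Welch's method and returns the magnitude-squared coherence per frequency. A sketch on synthetic signals (the 10 Hz shared component and the noise level are illustrative; MIT-BIH records are sampled at 360 Hz):

```python
import numpy as np
from scipy.signal import coherence

fs = 360.0                                     # MIT-BIH sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two signals sharing a 10 Hz component, plus independent noise in each channel
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.normal(size=t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=512)   # Welch segments of 512 samples
print(round(Cxy[np.argmin(abs(f - 10))], 2))   # coherence near 1 at the shared 10 Hz
```

Coherence is near 1 only where the two signals share power with a stable phase relation, which is what makes per-frequency SC values usable as classifier input features.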
Procedia PDF Downloads 280
7670 A Two Server Poisson Queue Operating under FCFS Discipline with an 'm' Policy
Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N.Thillaigovindan
Abstract:
For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing that pain through a Poisson M/M1,M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j=1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Furthermore, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be installed if and only if the queue length exceeds the value m as a threshold. Steady-state results on queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise, equalizing the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 (with μ2=0) while the system size is strictly less than T=(m+2), and to μ1+μ2 (with μ2>0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1,M2/2 queues.
To conclude, this investigation has a viable application: the results of the M/M1+M2/1 queue have been used in the processing of waiting messages in a single computer node and to measure the power consumption of the node.
Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue
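The threshold service policy described above has a level-dependent birth-death structure: the total service rate is μ1 while the system size is at or below the threshold and μ1+μ2 above it. A numerical sketch that solves the truncated stationary distribution and applies Little's law; the rates and threshold are illustrative, and the simplified death-rate rule is an assumption rather than the paper's exact M/M1,M2/2 formulation:

```python
# Birth-death model: arrivals at rate lam; server-1 alone (rate mu1) serves while
# the system size is at most m, both servers (mu1 + mu2) serve above m.
def stationary(lam, mu1, mu2, m, nmax=2000):
    death = lambda n: mu1 if n <= m else mu1 + mu2
    p = [1.0]                                  # unnormalized balance: p_n = p_{n-1} * lam / death(n)
    for n in range(1, nmax + 1):
        p.append(p[-1] * lam / death(n))
    z = sum(p)                                 # normalize (tail beyond nmax is negligible)
    return [pi / z for pi in p]

lam, mu1, mu2, m = 0.8, 1.0, 0.6, 2
p = stationary(lam, mu1, mu2, m)
L = sum(n * pn for n, pn in enumerate(p))      # mean number in system
W = L / lam                                    # mean sojourn time by Little's law
print(round(L, 3), round(W, 3))
```

Sweeping μ2 in this sketch and pricing L against the cost of the second server reproduces, numerically, the kind of cost-balancing search the paper uses to locate μ*2.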
Procedia PDF Downloads 361
7669 Mechanical Behavior of Corroded RC Beams Strengthened by NSM CFRP Rods
Authors: Belal Almassri, Amjad Kreit, Firas Al Mahmoud, Raoul François
Abstract:
Corrosion of steel in reinforced concrete leads to several major defects. Firstly, a reduction in the cross-sectional area of the reinforcement and in its ductility results in premature bar failure. Secondly, the expansion of the corrosion products causes concrete cracking and steel-concrete bond deterioration; it also affects the bending stiffness of reinforced concrete members, causing a reduction in the overall load-bearing capacity of reinforced concrete beams. This paper investigates the validity of a repair technique using Near Surface Mounted (NSM) carbon-fibre-reinforced polymer (CFRP) rods to restore the mechanical performance of corrosion-damaged RC beams. In the NSM technique, the CFRP rods are placed inside pre-cut grooves and are bonded to the concrete with epoxy adhesive. Experimental results were obtained on two beams, both 3 m long and repaired in bending only: a corroded beam that had been exposed to natural corrosion for 25 years, and a control beam. Each beam was repaired with one 6-mm-diameter NSM CFRP rod. The beams were tested in a three-point bending test up to failure. Overall stiffness and crack maps were studied before and after the repair. Ultimate capacity, ductility and failure mode were also reviewed. Finally, some comparisons were made between repaired and non-repaired beams in order to assess the effectiveness of the NSM technique. The experimental results showed that the NSM technique improved the overall characteristics (ultimate load capacity and stiffness) of the control and corroded beams and allowed sufficient ductility to be restored to the repaired corroded elements, thus restoring the safety margin, despite the non-classical mode of failure that occurred in the corroded beam, with separation of the concrete cover due to corrosion products.
Keywords: carbon fibre, corrosion, strength, mechanical testing
Procedia PDF Downloads 447
7668 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). The data were first coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). For each method, different parameters were analyzed, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for temporarily defaulter classification). However, the best accuracy does not always indicate the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications.
All these intrinsic details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
Procedia PDF Downloads 103
7667 Settlement of the Foundation on the Improved Soil: A Case Study
Authors: Morteza Karami, Soheila Dayani
Abstract:
Deep Soil Mixing (DSM) is a soil improvement technique that involves mechanically mixing the soil with a binder material to improve its strength, stiffness, and durability. This technique is typically used in geotechnical engineering applications where weak or unstable soil conditions exist, such as in building foundations, embankment support, or ground improvement projects. In this study, the settlement of a foundation on soil improved using the wet DSM technique has been analyzed for a case study. Before DSM production, the initial soil mixture was determined based on laboratory tests, and the mix designs were then optimized based on pilot-scale tests. The results show that the spacing and depth of the DSM columns depend on the soil properties, the intended loading conditions, and other factors such as the available space and equipment limitations. Moreover, monitoring instruments installed in the pilot area verify that the settlement of the foundation remains within an acceptable range, ensuring that the soil mixture provides the required strength and stiffness to support the structure or load. As an important result, if the DSM columns touch or penetrate into the stiff soil layer, the settlement of the foundation can be significantly decreased. Furthermore, the DSM columns should be allowed to cure sufficiently before any significant loads are placed on the structure, to prevent excessive deformation or settlement.
Keywords: deep soil mixing, soil mixture, settlement, instrumentation, curing age
Procedia PDF Downloads 81
7666 Study of Intergranular Corrosion in Austenitic Stainless Steels Using Electrochemical Impedance Spectroscopy
Authors: Satish Kolli, Adriana Ferancova, David Porter, Jukka Kömi
Abstract:
Electrochemical impedance spectroscopy (EIS) has been used to detect sensitization in austenitic stainless steels heat treated in the temperature range 600-820 °C to produce different degrees of sensitization in the material. The tests were conducted at five different DC potentials in the transpassive region. The degree of sensitization was quantified using double-loop electrochemical potentiokinetic reactivation (DL-EPR) tests. The correlation between EIS Nyquist diagrams and DL-EPR degree-of-sensitization values has been studied. The EIS technique can be used as a qualitative tool for determining intergranular corrosion in austenitic stainless steels heat treated at a given temperature.
Keywords: electrochemical technique, intergranular corrosion, sensitization, stainless steels
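For context, the DL-EPR degree of sensitization mentioned in the abstract is conventionally quantified as the ratio of the peak reactivation current to the peak activation current. A minimal sketch of that standard ratio (the function name and sample current values are illustrative, not taken from the paper):

```python
def dos_percent(i_reactivation_peak: float, i_activation_peak: float) -> float:
    """Degree of sensitization (DOS) as the conventional DL-EPR ratio
    Ir/Ia, expressed as a percentage."""
    return 100.0 * i_reactivation_peak / i_activation_peak

# e.g. a reactivation peak of 0.8 mA/cm^2 against an activation peak of
# 40 mA/cm^2 gives a DOS of 2% (illustrative values)
print(dos_percent(0.8, 40.0))  # 2.0
```

A low ratio indicates an unsensitized microstructure; higher values correspond to the heavier chromium-depleted grain boundaries produced by the 600-820 °C heat treatments.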
Procedia PDF Downloads 179
7665 Dynamic Analysis of Functionally Graded Nano Composite Pipe with PZT Layers Subjected to Moving Load
Authors: Morteza Raminnia
Abstract:
In this study, the dynamic analysis of a functionally graded nano-composite pipe reinforced by single-walled carbon nano-tubes (SWCNTs), with simply supported boundary conditions and subjected to moving mechanical loads, is investigated. The material properties of the functionally graded carbon nano-tube-reinforced composite (FG-CNTRC) are assumed to be graded in the thickness direction and are estimated through a micro-mechanical model. The polymeric matrix is considered an isotropic material, and for the CNTRC, a uniform distribution (UD) and three types of FG distribution patterns of the SWCNT reinforcements are considered. The equations of motion are derived using Hamilton's principle under the assumptions of first-order shear deformation theory (FSDT). Thin piezoelectric layers embedded on the inner and outer surfaces of the FG-CNTRC layer act as a distributed sensor and actuator to control the dynamic characteristics of the laminated pipe. The modal analysis technique and Newmark's integration method are used to calculate the displacement and dynamic stress of the pipe subjected to moving loads. The effects of the material distribution and the velocity of the moving loads on the dynamic behavior of the pipe are presented. The approach is validated by comparing the numerical results with published results in the literature. The results show that the above-mentioned effects play a very important role in the dynamic behavior of the pipe, and the present work yields results of interest to the scientific and engineering community in the field of FGM nano-structures.
Keywords: nano-composite, functionally graded material, moving load, active control, PZT layers
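The Newmark time-stepping referred to above can be sketched for a single degree of freedom (a generic average-acceleration Newmark-beta scheme, not the authors' pipe model; all parameter values are illustrative):

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t) for a single
    DOF (average-acceleration variant, unconditionally stable)."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(n - 1):
        # effective load assembled from the current state
        p = (f[i + 1]
             + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                    + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p / k_eff
        v[i + 1] = (gamma / (beta * dt)) * (u[i + 1] - u[i]) \
                   + (1 - gamma / beta) * v[i] \
                   + dt * (1 - gamma / (2 * beta)) * a[i]
        a[i + 1] = (u[i + 1] - u[i]) / (beta * dt ** 2) \
                   - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i]
    return u, v, a

# a suddenly applied constant load settles to the static deflection f/k
u, _, _ = newmark_sdof(m=1.0, c=2.0, k=4.0, f=np.full(2000, 4.0), dt=0.01)
```

In a modal-analysis setting like the abstract's, each decoupled modal equation is integrated with exactly this recursion before the modal responses are superposed.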
Procedia PDF Downloads 419
7664 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect
Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy
Abstract:
A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible to improve the quality of the solution while reducing computing expenses. In contrast, even a carefully designed CGA can only balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method (the pattern search technique) is then employed for fine tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expenses and the cost reduction of power generation.
Keywords: genetic algorithms, economic dispatch, pattern search
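The two-level hybrid described above can be sketched as follows: a generic GA locates the promising region, then a compass pattern search polishes the best individual. The toy one-variable cost mimics a valve-point-style ripple; all parameters and the cost function are illustrative, not the paper's 13- or 40-unit systems:

```python
import random, math

def pattern_search(f, x, step=0.5, tol=1e-4):
    """Compass pattern search: probe +/- step in each coordinate and
    halve the step whenever no probe improves the incumbent."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def hybrid_ga(f, dim, lo, hi, pop=30, gens=40, seed=0):
    """Base-level GA (elitist, blend crossover, Gaussian mutation)
    followed by a pattern-search fine-tuning stage."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([(u + w) / 2 + rng.gauss(0, 0.1)
                             for u, w in zip(a, b)])
        P = elite + children
    return pattern_search(f, min(P, key=f))

# toy single-unit cost: quadratic fuel cost plus a rectified-sine
# valve-point-style ripple (illustrative, minimum near x = pi)
cost = lambda x: (x[0] - 3) ** 2 + 0.2 * abs(math.sin(2 * x[0]))
x, fx = hybrid_ga(cost, dim=1, lo=0.0, hi=10.0)
```

The division of labour mirrors the abstract: the GA only needs to land in the right basin, after which the derivative-free local search supplies the accuracy that would otherwise cost the GA many extra generations.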
Procedia PDF Downloads 443
7663 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
There is a current trend among a large number of organizations and individuals to use cloud computing, and many consider it a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and uptake of cloud computing infrastructure, diverse computing processes can be executed in the cloud environment, and many organizations and individuals around the world depend on it to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment relates to allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to the changing environment and the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, which we call ACO-TS (Ant Colony Optimization for Task Scheduling), has been proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a stochastic optimization search method that is used here for assigning incoming tasks to the available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among the virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework.
Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in terms of both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
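The task-to-VM assignment loop can be illustrated with a toy version of this kind of scheduler (generic ACO with best-so-far pheromone reinforcement; the task lengths, VM speeds, and parameter values are illustrative, not the paper's CloudSim experiments):

```python
import random

def aco_schedule(task_lengths, vm_speeds, ants=20, iters=50,
                 alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Toy ACO task-to-VM assignment minimising makespan. Pheromone
    tau[t][v] biases assigning task t to VM v; the visibility heuristic
    favours the VM that would finish the task soonest."""
    rng = random.Random(seed)
    T, V = len(task_lengths), len(vm_speeds)
    tau = [[1.0] * V for _ in range(T)]
    best, best_mk = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load = [0.0] * V
            assign = []
            for t in range(T):
                # desirability = pheromone^alpha * visibility^beta
                w = [tau[t][v] ** alpha *
                     (1.0 / (load[v] + task_lengths[t] / vm_speeds[v])) ** beta
                     for v in range(V)]
                v = rng.choices(range(V), weights=w)[0]
                assign.append(v)
                load[v] += task_lengths[t] / vm_speeds[v]
            mk = max(load)  # makespan of this ant's schedule
            if mk < best_mk:
                best, best_mk = assign, mk
        # evaporate, then reinforce the best-so-far schedule
        for t in range(T):
            for v in range(V):
                tau[t][v] *= (1 - rho)
            tau[t][best[t]] += 1.0 / best_mk
    return best, best_mk

tasks = [4, 8, 2, 6, 3, 5]   # task lengths (illustrative)
vms = [1.0, 2.0]             # VM speeds (illustrative)
schedule, makespan = aco_schedule(tasks, vms)
```

Reinforcing the best-so-far schedule with an amount proportional to 1/makespan is what steers later ants toward load-balanced assignments, the same objective the abstract describes.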
Procedia PDF Downloads 420
7662 Cognitive Methods for Detecting Deception During the Criminal Investigation Process
Authors: Laid Fekih
Abstract:
Background: It is difficult to detect lying, deception, and misrepresentation just by looking at verbal or non-verbal expression during the criminal investigation process, despite the common belief that it is possible to tell whether a person is lying or telling the truth just by looking at the way they act or behave. The process of detecting lies and deception during criminal investigation needs more study and research to overcome the difficulties facing investigators. Method: The present study aimed to assess the effectiveness of cognitive methods and techniques in detecting deception during criminal investigation. It adopted a quasi-experimental method and covered a sample of 20 defendants distributed randomly into two homogeneous groups: an experimental group of 10 defendants subjected to criminal investigation with cognitive deception-detection techniques, and a second group of 10 defendants subjected to the direct investigation method. The tool used was a guided interview based on models of investigative questions following the cognitive deception-detection approach, which consists of three of Vrij's techniques (imposing cognitive load, encouraging the interviewee to provide more information, and asking unexpected questions), and the direct investigation method. Results: The results revealed a significant difference between the two groups in lie-detection accuracy in favour of the defendants investigated with the cognitive techniques; the cognitive deception-detection approach produced superior total accuracy rates both with human observers and through an analysis of objective criteria. The cognitive deception-detection approach produced superior accuracy in truth detection (71%) and deception detection (70%), compared to the direct investigation method (truth detection: 52%; deception detection: 49%).
Conclusion: The study recommends that practitioners use a cognitive deception-detection technique; they will correctly classify more individuals than when using a direct investigation method.
Keywords: cognitive lie-detection approach, deception, criminal investigation, mental health
Procedia PDF Downloads 65
7661 Numerical Analysis of Cold-Formed Steel Shear Wall Panels Subjected to Cyclic Loading
Authors: H. Meddah, M. Berediaf-Bourahla, B. El-Djouzi, N. Bourahla
Abstract:
Shear walls made of cold-formed steel are used as lateral force-resisting components in residential and low-rise commercial and industrial constructions. The seismic design analysis of such structures is often complex due to the slenderness of the members and their proneness to instability. In this context, a simplified modeling technique for the panel is proposed using the finite element method. The approach is based on idealizing the whole panel by a nonlinear shear link element, which reflects its shear behavior, connected to rigid body elements that transmit the forces to the end elements (studs), which resist the tension and the compression. The numerical model of the shear wall panel was subjected to cyclic loads in order to evaluate the seismic performance of the structure in terms of lateral displacement and energy dissipation capacity. To validate this model, the numerical results were compared with test results from the literature. This modeling technique is particularly useful for the design of cold-formed steel structures, where the shear forces in each panel and the axial forces in the studs can be obtained using spectrum analysis.
Keywords: cold-formed steel, cyclic loading, modeling technique, nonlinear analysis, shear wall panel
Procedia PDF Downloads 289
7660 The Environmental Effects of the Flood Disaster in Anambra State
Authors: U. V. Okpala
Abstract:
Flood is an overflow of water that submerges or 'drowns' land. In developing countries it occurs as a result of the blocking of natural and man-made drainages and poor maintenance of water dams/reservoirs, which may give way after persistent heavy downpours. In coastal lowlands and swamp lands, flooding is aided mainly by blocked channels and by indiscriminate sand filling of coastal swamp areas and natural drainage channels for urban development and construction. In this paper, the causes of flooding and the possible scientific, technological, political, economic and social impacts of flood disaster on the environment are studied, with the flood disaster in Anambra State as a case study. Flooding is often driven by climate change, especially in the developed economies where scientific mitigating options are highly employed. Researchers have identified greenhouse gases (GHG) as the cause of global climate change. The recent flood disaster in Anambra State, which caused physical damage to structures, social dislocation, contamination of clean drinking water, spread of water-borne diseases, shortage of crops and food supplies, death of non-tolerant tree species, disruption of the transportation system, serious economic loss and psychological trauma, is a function of climate change. There is a need to encourage the generation of renewable energy, the use of less carbon-intensive fuels and other energy-efficient sources. Carbon capture/sequestration, proper management of our drainage systems and good maintenance of our dams are good options for saving the environment.
Keywords: flooding, climate change, carbon capture, energy systems
Procedia PDF Downloads 374
7659 Resolution of Artificial Intelligence Language Translation Technique Alongside Microsoft Office Presentation during Classroom Teaching: A Case of Kampala International University in Tanzania
Authors: Abigaba Sophia
Abstract:
Artificial intelligence (AI) has transformed the education sector, revolutionizing educational frameworks by providing new opportunities and innovative, advanced platforms for language translation during the teaching and learning process. In today's education sector, the primary key to scholarly communication is language; therefore, translation between different languages becomes vital in the process of communication. KIU-T, being an international university, admits students from different nations speaking different languages; English is the official language, and some students find it hard to grasp words during teaching and learning. This paper explores the practical aspects of using artificial intelligence technologies for advanced language translation during teaching and learning. The impact of this technology is reflected in education strategies that equip students with the necessary knowledge and skills for professional activity in the way they understand best. The researcher evaluated the demand for this practice, since students have to apply the knowledge they acquire, in their native language, in their own countries in the way they understand best. The main objective is to improve students' language competence and lay a solid foundation for their future professional development. A descriptive-analytic approach was deemed best for investigating the phenomenon of language translation intelligence alongside Microsoft Office during the teaching and learning process. The study analysed the responses of 345 students from different academic programs. Based on the findings, the researcher recommends using the artificial intelligence language translation technique during teaching, which requires the wisdom of human content designers and educational experts.
Lecturers and students will be trained in the basic knowledge of this technique to improve the effectiveness of teaching and learning and to meet students' needs.
Keywords: artificial intelligence, language translation technique, teaching and learning process, Microsoft Office
Procedia PDF Downloads 78
7658 Improving Junior Doctor Induction Through the Use of Simple In-House Mobile Application
Authors: Dmitriy Chernov, Maria Karavassilis, Suhyoun Youn, Amna Izhar, Devasenan Devendra
Abstract:
Introduction and Background: A well-structured and comprehensive departmental induction improves patient safety and job satisfaction amongst doctors. The aims of our project were as follows: 1. Assess the perceived preparedness of junior doctors starting their rotation in Acute Medicine at Watford General Hospital. 2. Develop a supplemental induction guide and pocket reference in the form of an iOS mobile application. 3. Collect feedback after implementing the mobile application following a trial period of 8 weeks with a small cohort of junior doctors. Materials and Methods: A questionnaire was distributed to all new junior trainees starting in the Department of Acute Medicine to assess their experience of the existing induction. A mobile induction application was developed and trialled over a period of 8 weeks, distributed in addition to the existing didactic induction session. After the trial period, the same questionnaire was distributed to assess improvement in the induction experience. Analytics data were collected with users' consent to gauge user engagement and identify areas of improvement for the application. A feedback survey about the app was also distributed. Results: A total of 32 doctors used the application during the 8-week trial period. The application was accessed 7259 times in total, with the average user spending a cumulative 37 minutes 22 seconds on the app. The most used section was Clinical Guidelines, accessed 1490 times. The app feedback survey revealed positive reviews: 100% of participants (n=15/15) responded that the app improved their overall induction experience compared to other placements; 93% (n=14/15) responded that the app improved overall efficiency in completing daily ward jobs compared to previous rotations; and 93% (n=14/15) responded that the app improved patient safety overall.
In the pre-app and post-app induction surveys, participants reported: a 48% improvement in awareness of the practical aspects of the job; a 26% improvement in awareness of where to locate pathways and clinical guidelines; and a 40% reduction in feelings of being overwhelmed. Conclusions and recommendations: This study demonstrates the importance of technology in medical education and clinical induction. The mobile application's average engagement time equates to over 20 cumulative hours of on-the-job training delivered to each user within an 8-week period. The most used and referred-to section was Clinical Guidelines, which shows that there is high demand for an accessible pocket guide to this type of material. This simple mobile application resulted in a significant improvement in feedback about induction in our Department of Acute Medicine and will likely improve workplace satisfaction. Limitations of the application include the following: the post-app surveys had a small number of participants; the app is currently only available to iPhone users; some useful sections are nested deep within the app; it lacks deep search functionality across all sections; it lacks real-time user feedback; and it requires regular review and updates. Future steps for the app include: developing a web app with an admin dashboard to simplify uploading and editing content; comprehensive search functionality; and a user feedback and peer-rating system.
Keywords: mobile app, doctor induction, medical education, acute medicine
Procedia PDF Downloads 85
7657 A Blind Three-Dimensional Meshes Watermarking Using the Interquartile Range
Authors: Emad E. Abdallah, Alaa E. Abdallah, Bajes Y. Alskarnah
Abstract:
We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, or spread, of the 3D model vertices. The algorithm starts by converting all the vertices to spherical coordinates and then partitioning them into small groups. The proposed algorithm slightly alters the interquartile range distribution of the small groups based on a predefined watermark. The experimental results on several 3D meshes prove the perceptual invisibility and the robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling, and rotation, as well as combinations of these attacks.
Keywords: watermarking, three-dimensional models, perceptual invisibility, interquartile range, 3D attacks
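The core measurements of such a scheme, spherical conversion and the interquartile range of a vertex group, can be sketched as follows. The embedding rule shown is an illustrative spread adjustment about the group median, not the authors' exact alteration:

```python
import math
import statistics

def to_spherical(vertices):
    """Cartesian (x, y, z) -> spherical (r, theta, phi) per mesh vertex."""
    out = []
    for x, y, z in vertices:
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(z / r) if r else 0.0
        phi = math.atan2(y, x)
        out.append((r, theta, phi))
    return out

def iqr(values):
    """Interquartile range: spread between the 1st and 3rd quartiles."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return q3 - q1

def embed_bit(radii, bit, strength=0.02):
    """Nudge one vertex group's radial spread about its median: widen the
    IQR for bit 1, tighten it for bit 0 (illustrative rule)."""
    m = statistics.median(radii)
    s = 1 + strength if bit else 1 - strength
    return [m + (r - m) * s for r in radii]

group = [r for r, _, _ in to_spherical(
    [(1, 0, 0), (0, 1.1, 0), (0, 0, 1.2), (1.3, 0, 0), (0, 0, 1.4)])]
marked = embed_bit(group, bit=1)
```

Because the adjustment is tiny and symmetric about the median, the vertex positions barely move (perceptual invisibility), while an order statistic like the IQR is largely insensitive to noise and smoothing, which is what gives this family of methods its robustness.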
Procedia PDF Downloads 472
7656 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data
Authors: Salihah Alghamdi, Surajit Ray
Abstract:
Space-time data can be observed over irregularly shaped manifolds, which might have complex boundaries or interior gaps. Most of the existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which account for the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, quantified by the enhanced vegetation index (EVI). The EVI data consist of measurements taking values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates irregularly shaped regions, taking the complex boundaries into account rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary
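The temporal part of the penalty can be illustrated in discrete form: a generic second-difference roughness smoother standing in for the integrated squared second derivative (not the authors' finite-element estimator; the penalty weight and the toy signal are illustrative):

```python
import numpy as np

def penalized_fit(y, lam):
    """Penalized least squares: minimise ||y - f||^2 + lam * ||D2 f||^2,
    where D2 is the discrete second-difference operator standing in for
    the integrated squared second derivative."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second differences
    # the normal equations give the closed form f = (I + lam * D2'D2)^{-1} y
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# a noisy seasonal-greenness-like signal, smoothed by the roughness penalty
t = np.linspace(0.0, 1.0, 50)
noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).standard_normal(50)
smooth = penalized_fit(noisy, lam=5.0)
```

In the full ST-PDE model the analogous spatial term integrates the squared Laplacian over the finite element mesh, so the penalty never reaches across a boundary or an interior gap of the domain.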
Procedia PDF Downloads 139