Search results for: testing methods.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4854

654 Feature Based Dense Stereo Matching Using Dynamic Programming and Color

Authors: Hajar Sadeghi, Payman Moallem, S. Amirhassan Monadjemi

Abstract:

This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to shrink the search space and thereby increase both accuracy and processing speed. The proposed method links the detected feature points into chains and compares feature points across different chains, which increases matching speed; color stereo matching is employed to further improve accuracy. After feature matching, dynamic programming produces the dense disparity map. The approach differs from classical DP methods in stereo vision in that it exploits the sparse disparity map obtained in the feature-based matching stage: DP is performed along each scan line only between pairs of already-matched feature points on that line, so the algorithm remains a true optimization method. It offers a good trade-off between accuracy and computational efficiency: in our experiments, the proposed algorithm increases accuracy by 20 to 70% and reduces running time by almost 70%.

Keywords: Chain Correspondence, Color Stereo Matching, Dynamic Programming, Epipolar Line, Stereo Vision.
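
The DP stage described above runs along scan lines between already-matched feature points. As an illustration only, the following minimal sketch implements classical scan-line DP stereo on a single pair of rows, with an absolute-difference matching cost and a linear smoothness penalty; the feature chains, color matching, and coarse-to-fine stages of the paper are omitted, and the cost terms are assumptions.

```python
import numpy as np

def scanline_dp_disparity(left_row, right_row, max_disp=16, smooth=0.1):
    """Classical scan-line DP stereo on one image row (illustrative only)."""
    n = len(left_row)
    cost = np.full((n, max_disp + 1), np.inf)
    back = np.zeros((n, max_disp + 1), dtype=int)
    # Matching cost: absolute intensity difference at each candidate disparity.
    for d in range(max_disp + 1):
        valid = np.arange(d, n)
        cost[valid, d] = np.abs(left_row[valid] - right_row[valid - d])
    # Accumulate along the scan line with a smoothness penalty on disparity jumps.
    acc = cost.copy()
    for x in range(1, n):
        for d in range(max_disp + 1):
            prev = acc[x - 1] + smooth * np.abs(np.arange(max_disp + 1) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = cost[x, d] + prev[back[x, d]]
    # Backtrack the optimal disparity path.
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

left = np.random.rand(64)
right = np.roll(left, -4)                     # synthetic shift of 4 pixels
print(scanline_dp_disparity(left, right)[8:-8])  # borders are unreliable
```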

653 Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

Authors: Eggertson, E.C., Kapulla, R., Fokken, J., Prasser, H.M.

Abstract:

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in the piping systems of nuclear reactors. Through repeated thermal expansion and contraction cycles, these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows across a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel at Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference, and distance from the mixing point is analyzed using traditional methods drawn from atmospheric and oceanic stratification studies. A definition of the mixing layer thickness that is more appropriate to thermal fatigue, based on mixedness, is devised. This definition shows that thermal fatigue risk assessed from simple mixing layer growth can be misleading, and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.

Keywords: Concentration measurements, Mixedness, Stably-stratified turbulent isokinetic mixing layer, Wire mesh sensor
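
The paper's mixedness-based definition of layer thickness is not spelled out in the abstract. Purely as a hedged illustration, the sketch below uses a common mixedness measure, 4c(1-c) for a normalized concentration c, and takes the layer thickness as the vertical span over which the cross-stream-averaged mixedness exceeds a threshold; the field, the threshold, and the averaging are all assumptions, not the authors' definition.

```python
import numpy as np

def mixedness_profile(c):
    """Mixedness of a normalized concentration field c in [0, 1]:
    0 for pure streams, 1 for a perfectly mixed 50/50 state."""
    return 4.0 * c * (1.0 - c)

def layer_thickness(y, c, threshold=0.25):
    """Thickness of the mixing layer: the vertical span where the
    cross-stream-averaged mixedness exceeds the chosen threshold."""
    m = mixedness_profile(c).mean(axis=1)   # average over the cross-stream axis
    inside = y[m > threshold]
    return inside.max() - inside.min() if inside.size else 0.0

# Synthetic wire-mesh-sensor-like map: tanh interface plus measurement noise.
y = np.linspace(-1.0, 1.0, 64)              # vertical coordinate
x = np.linspace(0.0, 1.0, 16)               # cross-stream positions
c = 0.5 * (1.0 + np.tanh(y[:, None] / 0.2)) + 0.02 * np.random.default_rng(0).normal(size=(64, 16))
c = np.clip(c, 0.0, 1.0)
print(f"mixing layer thickness ~ {layer_thickness(y, c):.2f}")
```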

652 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks

Authors: Yao-Hong Tsai

Abstract:

Owing to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is typically used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group, or object, or the investigation of crime. Many surveillance systems based on computer vision have been developed in recent years, and moving-target tracking, in which an Unmanned Aerial Vehicle (UAV) finds and follows objects of interest, is the most common task in mobile aerial surveillance for civilian applications. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on the UAV are fused by a deep convolutional neural network. A recurrent neural network is then constructed to extract high-level image features for object tracking and low-level image features for noise reduction. The system distributes computation between local and cloud platforms to efficiently perform object detection, tracking, and collision avoidance across multiple UAVs. Experiments on several challenging datasets show that the proposed algorithm outperforms state-of-the-art methods.

Keywords: Unmanned aerial vehicle, object tracking, deep learning, collision avoidance.
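
The abstract names a CNN for image fusion feeding an RNN for feature extraction but gives no architectural details. The following PyTorch sketch is a schematic stand-in for such a pipeline; all layer sizes, the LSTM choice, and the collision-risk head are assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn

class ConvLSTMTracker(nn.Module):
    """Toy CNN-feature + recurrent pipeline: per-frame conv features
    are flattened and fed through an LSTM; the head scores collision risk."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2),                  # -> 32 x 2 x 2 = 128
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # collision-risk score

    def forward(self, frames):                        # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.rnn(feats)
        return torch.sigmoid(self.head(out[:, -1]))   # risk at the last frame

model = ConvLSTMTracker()
print(model(torch.randn(2, 8, 3, 64, 64)).shape)      # torch.Size([2, 1])
```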

651 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization methods. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve the highly non-linear, mixed-integer optimization problems typical of complex engineering systems. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed to find stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced-order model if the original high-order model is stable. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model order reduction technique.

Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.
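
To make the ISE-based reduction step concrete, here is a hedged sketch: an assumed third-order transfer function is reduced to a stable first-order model by a minimal particle swarm minimizing the ISE between unit step responses. The system, the swarm settings, and the reduced-model structure are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy import signal

# Assumed original higher-order system (illustrative only).
G = signal.TransferFunction([8], [1, 6, 11, 6])
t = np.linspace(0, 10, 500)
_, y_orig = signal.step(G, T=t)

def ise(params):
    """Integral squared error between step responses; reduced model k/(s+a)."""
    k, a = params
    if a <= 0:                      # enforce stability of the reduced model
        return np.inf
    _, y_red = signal.step(signal.TransferFunction([k], [1, a]), T=t)
    return np.trapz((y_orig - y_red) ** 2, t)

# Minimal particle swarm optimization over (k, a).
rng = np.random.default_rng(0)
n, dim = 30, 2
pos = rng.uniform(0.1, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([ise(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("reduced model: %.3f/(s + %.3f), ISE = %.2e" % (gbest[0], gbest[1], ise(gbest)))
```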

650 Comparison of Router Intelligent and Cooperative Host Intelligent Algorithms in a Continuous Model of Fixed Telecommunication Networks

Authors: Dávid Csercsik, Sándor Imre

Abstract:

The performance of state-of-the-art worldwide telecommunication networks depends strongly on the efficiency of the applied routing mechanism, and game-theoretical approaches to this problem offer new solutions. In this paper, a new continuous network routing model is defined to describe data transfer in fixed telecommunication networks with multiple hosts. The nodes of the network correspond to routers, whose latency is assumed to be traffic-dependent. We propose that the whole traffic of the network can be decomposed into a finite number of tasks belonging to the various hosts. To describe their differing latency sensitivity, a utility function is defined for each task. The model is used to compare router-intelligent and host-intelligent routing methods, corresponding to various data transfer protocols. We analyze host-intelligent routing as a transferable utility cooperative game with externalities. The main aim of the paper is to provide a framework in which the efficiency of various routing algorithms can be compared and the transferable utility game arising in the cooperative case can be analyzed.

Keywords: Routing, Telecommunication networks, Performance evaluation, Cooperative game theory, Partition function form games

649 Patients’ Perceptions of Receiving a Diagnosis of a Hematological Malignancy, Following the SPIKES Protocol

Authors: L. Dixon, D. Gavani

Abstract:

Objective: Sharing devastating news with patients is often considered the most difficult task of doctors. This study aimed to explore patients' perceptions of receiving bad news, including which features improve the experience and which areas need refining. Methods: A questionnaire was written based on the steps of the SPIKES model for breaking bad news. Twenty patients receiving treatment for a hematological malignancy completed the questionnaire. Results: Overall, the results are promising, as most patients praised their consultation. 'Poor' ratings were more common among women and participants aged 45-64. The main differences between the 'excellent' and 'poor' consultations concerned the doctor's sensitivity and whether the patient's understanding was checked. Only 35% of patients were asked about their existing knowledge, and 85% of consultations failed to discuss the impact of the diagnosis on daily life. Conclusion: This study agreed with the consensus of the existing literature. The commended aspects included the consultation set-up and the information given. Areas patients felt needed improvement included doctors determining the patient's existing knowledge and checking that new information had been understood. Doctors should also explore how the diagnosis will affect the patient's life and, with a poorer prognosis, work on conveying appropriate hope. The study was limited by a small sample size and potential recall bias.

Keywords: Communication, diagnosis, hematology, patients.

648 Analysis of Attention to the Confucius Institute from Domestic and Foreign Mainstream Media

Authors: Wei Yang, Xiaohui Cui, Weiping Zhu, Liqun Liu

Abstract:

The rapid development of the Confucius Institute is attracting increasing attention from mainstream media around the world, and mainstream media play a large role in public information dissemination and public opinion. This study analyzes the correlation and functional relationships between domestic and foreign mainstream media through the number of reports on the Confucius Institute. Three correlation measures, the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC), and the Kendall rank correlation coefficient (KCC), were applied to analyze the correlations among mainstream media from three regions: the mainland of China; Hong Kong and Macao (the two special administrative regions of China, denoted as SARs); and overseas countries excluding China, such as the United States, England, and Canada. Further, the paper measures the functional relationships among the regions using a regression model. The experimental analyses found high correlations among mainstream media from the different regions. Additionally, we found a linear relationship between the mainstream media of overseas countries and those of the SARs by analyzing the number of reports on the Confucius Institute, based on a dataset obtained by crawling the websites of 106 mainstream media outlets during the years 2004 to 2014.

Keywords: Confucius Institute, correlation analysis, mainstream media, regression model.
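
The three correlation measures named above, plus the regression step, are standard and available in SciPy. A minimal sketch on synthetic yearly report counts (the crawled data are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical yearly report counts, 2004-2014, for two regions.
mainland = rng.poisson(50, 11).cumsum()
overseas = mainland * 0.6 + rng.normal(0, 10, 11)    # loosely linearly related

pcc, p1 = stats.pearsonr(mainland, overseas)         # linear correlation
scc, p2 = stats.spearmanr(mainland, overseas)        # rank correlation
kcc, p3 = stats.kendalltau(mainland, overseas)       # rank concordance
print(f"PCC={pcc:.3f} (p={p1:.3g}), SCC={scc:.3f} (p={p2:.3g}), KCC={kcc:.3f} (p={p3:.3g})")

# Functional relationship via simple linear regression, as in the paper.
slope, intercept, r, p, se = stats.linregress(mainland, overseas)
print(f"overseas ~ {slope:.2f} * mainland + {intercept:.2f} (r^2={r**2:.3f})")
```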

647 Creating a Profound Sense of Comfort to Stimulate Workers’ Innovation and Productivity: Exploring Research and Case Study Applications

Authors: Rana Bazaid, Debajyoti Pati

Abstract:

Purpose: The aim of this research is to explore and discuss innovation workspaces and how the design of the workspace has the potential to boost the work process and encourage employees' satisfaction, leading to inventive and creative results. Background: The relationship between workers and the work environment has strong potential to enhance work outcomes when optimized for work goals. An innovation work environment can benefit employees' satisfaction, health, and performance. To understand this complex relationship, this research explores innovation work environments. Methods: A review of 26 peer-reviewed articles, seven books, and 23 companies' websites was conducted; in addition, five case studies were analyzed to deduce appropriate examples for the study. Results: The research found that all five successful innovation environments focused on two aspects: first, workers' satisfaction and comfort, which includes physical, functional, and psychological comfort; second, all five centers were diverse work environments that addressed workers' needs: design for individuals and teamwork, design for workers' freedom, and design for increased interaction. Conclusion: Understanding individuals' needs and creating work environments that enhance interaction between workers and with the space are key aspects of successful innovation work environments.

Keywords: Innovation-workspace, productivity, work environment, workers’ satisfaction.

646 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments with rules specified in advance, and by comparison with conventional methods. However, scope for development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets whose attribute values are contaminated by missing data and noise, since real-world datasets usually contain such data. This paper examines the first problem theoretically, in connection with rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.

Keywords: Rule induction, decision table, missing data, noise.
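
STRIM's core step is testing whether a candidate if-then rule is statistically significant against the decision table. The abstract does not reproduce the test statistic, so the sketch below substitutes a one-sided binomial test of whether the decision frequency under a condition exceeds its marginal rate; the choice of test, the synthetic table, and the planted rule are all assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)
n = 1000
# Synthetic decision table: 3 condition attributes, 1 decision attribute.
table = rng.integers(1, 4, size=(n, 3))
decision = rng.integers(1, 4, size=n)
mask = table[:, 0] == 2                        # candidate rule: if C1 = 2 ...
decision[mask] = np.where(rng.random(mask.sum()) < 0.7, 1, decision[mask])

def rule_significant(mask, decision, d=1, alpha=0.01):
    """Test rule 'if <mask condition> then D = d' against the marginal rate."""
    p0 = float(np.mean(decision == d))         # marginal frequency of decision d
    hits = int(np.sum(decision[mask] == d))    # matches under the condition
    res = binomtest(hits, int(mask.sum()), p0, alternative="greater")
    return res.pvalue < alpha, res.pvalue

ok, p = rule_significant(mask, decision)
print(f"rule 'if C1=2 then D=1' significant: {ok} (p={p:.3g})")
```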

645 Assessment of the Efficiency of Virtual Orthodontic Consultations during COVID-19

Authors: R. Litt, A. Brown

Abstract:

Aims: We aimed to assess the efficiency of 'Attend Anywhere' orthodontic clinics within a district general hospital during COVID-19. Our secondary aim was to pilot a questionnaire to assess patient satisfaction with virtual orthodontic appointments. Design: The study design is a service evaluation including a pilot questionnaire. Methods: The average number of patients seen per virtual clinic and the number of patients failing to attend were compared to face-to-face clinics. The capability of virtual appointments to prevent the need for a face-to-face appointment was assessed. Patients were invited to complete a telephone pilot questionnaire focusing on patient satisfaction and accessibility. Results: There was a small increase in the number of patients failing to attend virtual appointments; a third of the patients who did not attend had failed to receive the appointment link. 81.9% of virtual clinic appointments were successful and prevented the need for a face-to-face appointment. Overall, patients were very satisfied with their virtual orthodontic appointment, and the majority required no assistance to access the service. Conclusions: The use of 'Attend Anywhere' clinics in orthodontics offers patients and clinicians an effective and efficient alternative to face-to-face appointments that patients, on average, find easy to use and completely satisfactory.

Keywords: Clinics, COVID-19, orthodontics, patient satisfaction, virtual.

644 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network

Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon

Abstract:

In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor together with a Neural Network (NN). In the experiments, Batavia pineapples were used, generating 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), which labeled each sample as sweet or unsweet. In terms of experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a depth of 5 mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected, and the NN was used to classify the pineapple classes. Before classification, the preprocessing steps of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features for the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two classes of pineapples with favorable accuracy.

Keywords: Spectroscopy, soluble solid content, pineapple, neural network.
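
The abstract specifies two reflectance features (510 nm and 900 nm), standardization, a sequential model, and ReLU activation. The Keras sketch below mirrors that setup on synthetic data; the hidden width, optimizer, epoch count, and train/test split are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

rng = np.random.default_rng(3)
# Synthetic stand-in for the 100 samples: [R510, R900] per pineapple.
X = rng.normal([0.4, 0.6], 0.08, size=(100, 2)).astype("float32")
y = (X @ [1.2, -0.8] + rng.normal(0, 0.05, 100) > 0).astype("float32")  # sweet = 1

X = StandardScaler().fit_transform(X)          # standardization, as in the paper
model = keras.Sequential([
    keras.layers.Input(shape=(2,)),
    keras.layers.Dense(8, activation="relu"),  # hidden width is an assumption
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:70], y[:70], epochs=100, verbose=0)        # 70/30 train/test split
print("test accuracy:", model.evaluate(X[70:], y[70:], verbose=0)[1])
```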

643 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier

Authors: Maninder Kaur Gill, Alpana Agarwal

Abstract:

It is practically infeasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large: to keep the output out of saturation, the input would have to be so small that it cannot be measured with a digital multimeter. A test circuit for measuring the open-loop voltage gain of an operational amplifier has been proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle, even on a breadboard, and it requires only the device under test (DUT) along with resistors. The circuit has been used to measure the open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems and give accurate results without changing the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated and experimentally measured (average) gains are 101.15 dB and 105.15 dB, respectively. These values are close to the datasheet values.

Keywords: Device under test, open-loop voltage gain, operational amplifier, test circuit.
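
The reported gains are in decibels. As a quick worked example using the values quoted above (the 200 uV input in the last line is a hypothetical figure for illustration), the conversion between dB and the dimensionless voltage gain is:

```python
import math

def db_to_gain(db):         # Av = 10^(dB/20)
    return 10 ** (db / 20)

def gain_to_db(vout, vin):  # dB = 20*log10(Vout/Vin)
    return 20 * math.log10(vout / vin)

# LM741CN values from the abstract: simulated 89.71 dB, measured 87.71 dB.
print(f"89.71 dB -> Av = {db_to_gain(89.71):,.0f} V/V")   # ~30,600 V/V
print(f"87.71 dB -> Av = {db_to_gain(87.71):,.0f} V/V")   # ~24,300 V/V
# Hypothetical example: 5 V output swing for a 200 uV differential input.
print(f"example measurement: {gain_to_db(5.0, 200e-6):.2f} dB")
```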

642 Comparison of Different Advanced Oxidation Processes for Degrading 4-Chlorophenol

Authors: M.D. Murcia, M. Gomez, E. Gomez, J.L. Gomez, N. Christofi

Abstract:

The removal efficiency of 4-chlorophenol with different advanced oxidation processes has been studied. Oxidation experiments were carried out at two 4-chlorophenol concentrations, 100 mg L-1 and 250 mg L-1, using UV generated by a KrCl excilamp with and without H2O2 (molar ratio H2O2:4-chlorophenol = 25:1), and using the Fenton process (molar ratio H2O2:4-chlorophenol of 25:1 and an Fe2+ concentration of 5 mg L-1). The results show no significant difference in 4-chlorophenol conversion among the three assayed methods. However, significant concentrations of the photoproducts remained in the medium when the treatment involved UV without hydrogen peroxide. The Fenton process removed all the intermediate photoproducts except hydroquinone and 1,2,4-trihydroxybenzene, whereas UV with hydrogen peroxide removed all the intermediate photoproducts. Microbial bioassays were carried out utilizing the naturally luminescent bacterium Vibrio fischeri and a genetically modified Pseudomonas putida isolated from a waste treatment plant receiving phenolic waste. The results using V. fischeri show that, among the treated samples, only the UV treatment retained toxicity (IC50 = 38), whereas with the H2O2 and Fenton reactions the samples exhibited no toxicity after treatment in the range of concentrations studied. With the Pseudomonas putida biosensor, no toxicity could be detected in any of the samples following treatment, owing to the organism's higher tolerance of the phenol concentrations encountered.

Keywords: 4-chlorophenol, Fenton, photodegradation, UV, excilamp.

641 Effect of Cooling Rate on Base Metals Recovery from Copper Matte Smelting Slags

Authors: N. Tshiongo, R.K.K. Mbaya, K. Maweja, L.C. Tshabalala

Abstract:

A slag sample from a copper smelting operation in a water-jacket furnace at a DRC plant was used. The study set out to determine the effect of cooling rate on the extraction of base metals. The cooling methods investigated were water quenching, air cooling, and furnace cooling, and they were compared with the original as-received slag. It was observed that the cooling rate of the slag affected the leaching of base metals, as it changed the phase distribution in the slag and the distribution of base metals within the phases. It was also found that fast cooling of the slag prevented crystallization and produced an amorphous phase that encloses the base metals. The amorphous slags from the slag dumps were more leachable in acidic medium (HNO3), which leached 46% Cu, 95% Co, 85% Zn, 92% Pb, and 79% Fe with no selectivity at pH 0, than in basic medium (NH4OH). The leachability was reversed for the slags modified by quenching in water, which leached 89% Cu with high selectivity: extractions were below 1% for Co, Zn, Pb, and Fe at ambient temperature and pH 12. For the crystallized slags, leaching of base metals increased as the temperature rose from ambient to 60°C and decreased at the higher temperature of 80°C, owing to evaporation of the ammonia solution used for basic leaching; the total amounts of base metals leached from the slow-cooled slags were very low compared with the quenched slag samples.

Keywords: copper slag, leaching, amorphous, cooling rate

640 The Optimal Placement of Capacitor in Order to Reduce Losses and the Profile of Distribution Network Voltage with GA, SA

Authors: Limouzade E., Joorabian M.

Abstract:

Most of the losses in a power system arise in the distribution sector, which has therefore always received attention. One of the important factors contributing to increased losses in the distribution system is the existence of reactive power flows, and the most common way to compensate reactive power in the system is to use shunt (parallel) capacitors. In addition to reducing losses, the advantages of capacitor placement include releasing network capacity at peak load and improving the voltage profile. The key consideration in capacitor placement is the optimal location and sizing of the capacitors, so as to maximize these advantages. In this paper, a new technique is offered for the placement and sizing of fixed capacitors in a radial distribution network on the basis of the Genetic Algorithm (GA). Existing methods for optimal capacitor placement mostly reduce the losses and improve the voltage profile simultaneously, but the capacitor installation cost and load changes have not been treated as influences on the objective function. In this article, a holistic approach is taken to this problem that includes all the relevant parameters of the distribution network: cost, phase voltage, and load changes. Such a formulation requires a vast search over all possible solutions, so we use the Genetic Algorithm (GA) as a powerful method for this optimal search.

Keywords: Genetic Algorithm (GA), capacitor placement, voltage profile, network losses, Simulating Annealing (SA), distribution network.
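
As a hedged sketch of the GA formulation only (the paper's network model, load data, and exact objective are not given in the abstract), the toy example below sizes fixed capacitors on a 10-bus feeder with an assumed quadratic loss term plus a capacitor cost, using truncation selection, one-point crossover, and mutation.

```python
import numpy as np

rng = np.random.default_rng(4)
N_BUS, SIZES = 10, np.array([0, 150, 300, 450, 600])   # kvar options per bus
Q_LOAD = rng.uniform(100, 400, N_BUS)                  # bus reactive loads, kvar
LOSS_K, KVAR_COST = 0.015, 0.5                         # toy loss & cost weights

def fitness(chrom):
    """Toy objective: quadratic loss in uncompensated kvar + capacitor cost."""
    q_net = Q_LOAD - SIZES[chrom]
    return LOSS_K * np.sum(q_net ** 2) + KVAR_COST * np.sum(SIZES[chrom])

pop = rng.integers(0, len(SIZES), (40, N_BUS))         # each gene: size index
for _ in range(200):
    f = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(f)[:20]]                  # truncation selection
    cut = rng.integers(1, N_BUS, 20)
    kids = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 20][c:]))
                     for i, c in enumerate(cut)])      # one-point crossover
    mut = rng.random(kids.shape) < 0.05                # mutation
    kids[mut] = rng.integers(0, len(SIZES), mut.sum())
    pop = np.vstack((parents, kids))
best = pop[np.argmin([fitness(c) for c in pop])]
print("best sizes (kvar):", SIZES[best], "objective:", round(fitness(best), 1))
```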

639 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis, and most conventional methods need post-processing to deal with abnormal lung CT scans containing nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of two neighboring regions, which is inaccurate because such a metric is extremely sensitive to minor perturbations such as noise and other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using patch-based similarity measurements instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing, unlike the standard method.

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.
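
A common patch-based boundary term is a Gaussian of the distance between the patches around the two neighboring pixels. The exact form used in the paper is not stated in the abstract, so this numpy sketch assumes w(p, q) = exp(-||P_p - P_q||^2 / (2 sigma^2)) on 4-connected neighbors; patch radius and sigma are illustrative.

```python
import numpy as np

def patch(img, y, x, r):
    """r-radius patch around (y, x), clipped at the image border."""
    return img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]

def boundary_weight(img, p, q, r=2, sigma=0.1):
    """Gaussian weight on the mean squared patch difference (assumed form)."""
    a, b = patch(img, *p, r), patch(img, *q, r)
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    d2 = np.mean((a[:h, :w] - b[:h, :w]) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Synthetic image: smooth ramp with a sharp step standing in for a lung boundary.
img = np.tile(np.linspace(0.0, 0.2, 64), (64, 1))
img[:, 32:] += 0.8
print("across the edge:", boundary_weight(img, (10, 31), (10, 32)))  # ~0 -> cheap to cut
print("inside a region:", boundary_weight(img, (10, 10), (10, 11)))  # ~1 -> costly to cut
```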

638 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to developing a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We apply a three-stage technique for detecting premature ventricular contractions (PVCs) among normal beats and other heart conditions, comprising denoising, feature extraction, and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing-interval feature. Finally, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among them, stacked generalization, the proposed trainable combined neural network model, attains the highest recognition rate, around 95%, and therefore proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples of the different beat types were extracted from the MIT-BIH arrhythmia database for the study.

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.
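
Stacked generalization trains a meta-learner on the outputs of several base MLPs. A minimal scikit-learn sketch follows; the topologies, the logistic-regression meta-learner, and the synthetic 11-feature data are placeholders, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder for the 11-dimensional feature vectors (10 morphological + 1 timing).
X, y = make_classification(n_samples=600, n_features=11, n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Base MLPs with different topologies, combined by a trainable meta-learner.
base = [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=2000, random_state=i))
        for i, h in enumerate([(20,), (40,), (20, 10)])]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(Xtr, ytr)
print("stacked generalization accuracy:", round(stack.score(Xte, yte), 3))
```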

637 Armed Groups and Intra State Conflict: A Study on the Egyptian Case

Authors: Ghzlan Mahmoud Abdel Aziz

Abstract:

This case study aims to identify the intrastate conflicts between the nation state and armed groups. Nowadays, most wars weaken states relative to armed groups, so it is very important to negotiate with such groups in order to reinforce the law for the protection of victims. These armed groups are the cause of conflicts, and they are tied to many of the humanitarian issues that result from conflicts. In this age of rivalry, terrorists, insurgents, and transnational criminal parties have risen to prominence in reaction to these armed groups, in an effort to set up a new world order. Moreover, intrastate conflicts have become more treacherous than interstate conflicts, particularly when nation-state systems deal with armed groups that try to influence the state. The unexpected uprising of the Arab Spring during 2011 in parts of the Middle East and North Africa formed various patterns of conflict, and its events resulted in immediate and long-term change across the region, with significant shifts in the level, intensity, and duration of armed conflict around the world. Egypt was at the center of these events; it has fought back against armed groups in the name of counterterrorism, as they spread disorder and violence among civilians. On this note, this study focuses on the transformation in the methods of organized violence within one state rather than between two or more states, and it analyzes the objectives, strategies, and internal composition of armed groups and the environments that foster them, with a focus on the Egyptian case.

Keywords: Armed groups, conflicts, Egyptian armed forces, intrastate conflicts.

636 Fabrication of Nanoporous Template of Aluminum Oxide with High Regularity Using Hard Anodization Method

Authors: Hamed Rezazadeh, Majid Ebrahimzadeh, Mohammad Reza Zeidi Yam

Abstract:

Anodizing is an electrochemical process that converts a metal surface into a decorative, durable, corrosion-resistant anodic oxide finish. Aluminum is ideally suited to anodizing, although other nonferrous metals, such as magnesium and titanium, can also be anodized. The anodic oxide structure originates from the aluminum substrate and is composed entirely of aluminum oxide. This aluminum oxide is not applied to the surface like paint or plating but is fully integrated with the underlying aluminum substrate, so it cannot chip or peel. It has a highly ordered, porous structure that allows for secondary processes such as coloring and sealing. In this experimental paper, we focus on a reliable method for fabricating nanoporous alumina with high regularity, starting from a study of synthesis methods for nanostructured materials. Porous alumina was then fabricated in the laboratory by anodization of aluminum. Hard anodization processes were employed to fabricate the nanoporous alumina using 0.3 M oxalic acid at anodization voltages of 90, 120, and 140 V. The nanoporous templates were characterized by SEM and FFT, and the templates anodized at 140 V were the most highly ordered. The pore formation, the influence of the experimental conditions on it, the structural characteristics of the pores, and the oxide chemical reactions involved in pore growth are discussed.

Keywords: Alumina, Nanoporous Template, Anodization

635 Delineation of Oil-Polluted Sites in Ibeno LGA, Nigeria, Using Geophysical Techniques

Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John

Abstract:

Ibeno, Nigeria hosts the operational base of Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil and currently the highest oil and condensate producer in Nigeria. Besides MPNU, other oil companies operate onshore, on the continental shelf, and in the deep offshore of the Atlantic Ocean at Ibeno, Nigeria. This study was designed to delineate oil-polluted sites in Ibeno, Nigeria using the geophysical methods of electrical resistivity (ER) and ground penetrating radar (GPR). The results revealed hydrocarbon contamination of this environment by past crude oil spills, observed as high resistivity values, and the GPR profiles clearly show the distribution, thickness, and lateral extent of the hydrocarbon contamination as represented in the radargram reflector tones. Contamination was of varying degrees, ranging from slight to high, indicating substantial attenuation of the crude oil contamination over time. Moreover, the relatively lower resistivities of locations outside the impacted areas compared with those within them, together with the 3-D Cartesian images of the oil contaminant plume (depicted by red, light brown, and magenta for high, low, and very low oil-impacted areas, respectively), confirm significant recent pollution of the study area with crude oil.

Keywords: Electrical resistivity, geophysical investigations, ground penetrating radar, oil-polluted sites.

634 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain, and a spline function with high continuity is used as the weight function. The EFG method developed here is a truly meshless method, as it requires no mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate its performance; they show that the EFG method is efficient to implement and highly accurate. The method is used to analyze the static deflection of beams and of a plate with a hole.

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.
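
The MLS approximation at the heart of EFG builds shape functions phi_i(x) = p(x)^T A(x)^{-1} B_i(x) from a polynomial basis and a compactly supported weight. The 1-D numpy sketch below uses a linear basis and the standard cubic spline weight; the node layout and support size dmi are assumptions. The printed checks confirm the partition-of-unity and linear-consistency properties that MLS shape functions satisfy by construction.

```python
import numpy as np

def cubic_spline_weight(r):
    """Standard cubic spline weight on normalized distance r = |x - xi| / dmi."""
    w = np.zeros_like(r)
    a = r <= 0.5
    w[a] = 2/3 - 4*r[a]**2 + 4*r[a]**3
    b = (r > 0.5) & (r <= 1.0)
    w[b] = 4/3 - 4*r[b] + 4*r[b]**2 - (4/3)*r[b]**3
    return w

def mls_shape_functions(x, nodes, dmi):
    """phi_i(x) = p(x)^T A^{-1} B_i with linear basis p = [1, x]."""
    w = cubic_spline_weight(np.abs(x - nodes) / dmi)
    P = np.column_stack((np.ones_like(nodes), nodes))   # rows are p(x_i)
    A = P.T @ (w[:, None] * P)                          # moment matrix
    B = w[:, None] * P                                  # rows are B_i = w_i p(x_i)
    px = np.array([1.0, x])
    return B @ np.linalg.solve(A, px)                   # vector of phi_i(x)

nodes = np.linspace(0.0, 1.0, 11)
phi = mls_shape_functions(0.37, nodes, dmi=0.25)        # support covers several nodes
print("sum of shape functions:", phi.sum())             # ~1 (partition of unity)
print("reproduces x:", phi @ nodes)                     # ~0.37 (linear consistency)
```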

633 Bridging Stress Modeling of Composite Materials Reinforced by Fibers Using Discrete Element Method

Authors: Chong Wang, Kellem M. Soares, Luis E. Kosteski

Abstract:

The problem of toughening in brittle materials reinforced by fibers is complex, involving all the mechanical properties of the fibers, the matrix, and the fiber/matrix interface, as well as the geometry of the fibers, so new numerical methods suited to simulating and analyzing toughening are needed. In this work, we have performed simulations and analysis of toughening in a brittle matrix reinforced by randomly distributed fibers by means of the discrete element method. First, we put forward a mechanical model of the toughening contributed by random fibers. Then, with a numerical program, we investigated the stress, damage, and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers of high strength and low elastic modulus are beneficial to toughening; (ii) fibers whose elastic modulus is relatively high compared to the matrix may cause substantial matrix damage through the spalling effect; (iii) high-strength synthetic fibers are a good option for toughening. We expect that combining the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed. The present work can guide the design of high-performance ceramic composites through the optimization of these parameters.

Keywords: Bridging stress, discrete element method, fiber reinforced composites, toughening.

632 Technology Based Learning Environment and Student Achievement in English as a Foreign Language in Pakistan

Authors: M. Athar Hussain, M. Zafar Iqbal, M. Saeed Akhtar

Abstract:

The fast-growing accessibility and capability of emerging technologies have created enormous possibilities for designing, developing, and implementing innovative teaching methods in the classroom. The global technological scenario has paved the way to new pedagogies in the teaching-learning process, focusing on technology-based learning environments and their impact on student achievement. The present experimental study was conducted to determine the effectiveness of a technology-based learning environment on student achievement in English as a foreign language. The sample of the study was 90 tenth-grade students of a public school located in Islamabad. A pretest-posttest equivalent-groups design was used to compare the achievement of the two groups. A pretest and a posttest, each containing 50 items from the English textbook, were developed and administered, and the collected data were statistically analyzed. The results showed a significant difference between the mean scores of the experimental and control groups: the experimental group performed better on the posttest, indicating that teaching through a technology-based learning environment enhanced the achievement level of the students. On the basis of the results, it was recommended that teaching and learning through information and communication technologies be adopted to enhance the language learning capability of students.

Keywords: English as a Foreign Language, Student Achievement, Technology Based Learning
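
The comparison of group means described above is a standard independent-samples t-test. A minimal SciPy sketch on synthetic posttest scores follows; the score distributions are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Synthetic posttest scores out of 50 for two groups of 45 students each.
experimental = np.clip(rng.normal(38, 5, 45), 0, 50)
control = np.clip(rng.normal(33, 5, 45), 0, 50)

t, p = stats.ttest_ind(experimental, control)   # independent-samples t-test
print(f"mean exp = {experimental.mean():.1f}, mean ctrl = {control.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f} ->", "significant" if p < 0.05 else "not significant")
```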

631 Overview of Multi-Chip Alternatives for 2.5D and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With transistor sizes gradually approaching the physical limit, the persistence of Moore's Law is challenged by issues such as the short-channel effect and the development of high numerical aperture (NA) lithography equipment. In the context of the ever-increasing technical requirements of portable devices and high-performance computing (HPC), relying on the law's continuation to enhance chip density will no longer support the prospects of the electronics industry. Weighing a chip's power consumption, performance, area, cost, and cycle time to market (PPACC) is an updated benchmark for driving the evolution of advanced nanometer (nm) wafer processes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through-Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview surveys up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) in light of current transistor structures and technology nodes, and concludes that multi-chip solutions for 2.5D and 3D IC packaging can prolong Moore's Law.

Keywords: Moore’s Law, High Numerical Aperture, Power Consumption-Performance-Area-Cost-Cycle Time to Market, PPACC, 2.5 and 3D-Very-Large-Scale Integration Packaging, Through-Silicon Via.

630 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: J. Madureira, R. Lagido, I. Sousa

Abstract:

Surfing is an increasingly popular sport whose performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides: computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach computes the velocity from the Global Positioning System (GPS) signal and finds the velocity thresholds that identify the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video recorded on the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, the start times, and the durations. This paper shows that it is feasible to use smartphones to quantify performance metrics during surfing; in particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.

Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.
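
The first approach above is a velocity-threshold detector on the GPS speed trace. A hedged sketch follows; the start/end thresholds and minimum duration are chosen for illustration rather than taken from the paper, and the IMU fusion of the second approach is omitted.

```python
import numpy as np

def detect_wave_rides(t, speed, v_start=2.5, v_end=1.0, min_dur=3.0):
    """Threshold-based wave-ride detection on a GPS speed trace (m/s).
    A ride starts when speed rises above v_start and ends when it falls
    below v_end; rides shorter than min_dur seconds are discarded.
    The threshold values here are illustrative assumptions."""
    rides, start = [], None
    for ti, v in zip(t, speed):
        if start is None and v > v_start:
            start = ti
        elif start is not None and v < v_end:
            if ti - start >= min_dur:
                rides.append((start, ti - start))     # (start time, duration)
            start = None
    return rides

# Synthetic session: paddling noise with two fast wave rides.
t = np.arange(0.0, 120.0, 0.5)
speed = np.abs(np.random.default_rng(7).normal(0.8, 0.3, t.size))
speed[(t > 30) & (t < 38)] = 4.0                      # ride 1: ~8 s
speed[(t > 80) & (t < 86)] = 3.5                      # ride 2: ~6 s
for start, dur in detect_wave_rides(t, speed):
    print(f"wave ride at t = {start:.1f} s, duration {dur:.1f} s")
```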

629 Material Analysis for Temple Painting Conservation in Taiwan

Authors: Chen-Fu Wang, Lin-Ya Kung

Abstract:

For traditional painting materials, artisans used to combine pigments with different binders to create colors. As time went by, painting materials evolved from natural to chemical materials, and the vast variety of ingredients used in chemical materials has complicated restoration work, making conservation more difficult. Conservation work also becomes harder when the materials cannot be easily identified, so a more scientific approach is essential to assist it. Painting materials are high-molecular-weight polymers, and their analysis is very complicated; contamination such as smoke and dirt can also interfere with the analysis. Current methods for the compositional analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and deteriorated them, and the aging information was then used to build a database for examining the temple painting materials. By observing the FT-IR changes over time, we can tell that all of the painting materials are deteriorated by UV light, differing only in the speed of degradation; in the deterioration experiment, the acrylic resin resisted better than the others. After collecting the aging information of the painting materials by FT-IR, we performed tests on the paintings in the temples. It was found that most of the artisans used tung oil as the painting medium, while some other paintings used chemical materials. The method now works successfully for identifying the painting materials; however, it is destructive and costly. In the future, we will work on identifying painting materials more efficiently.

Keywords: Temple painting, painting material, conservation, FT-IR.

628 Methodology of Personalizing Interior Spaces in Public Libraries

Authors: Baharak Mousapour

Abstract:

Creating public spaces tailored to the specific demands of individuals is one of the challenges for contemporary interior designers. Improving general knowledge and providing a forum that people from all walks of life can use are among the objectives of a public library; in this regard, interior design consistent with the demands of individuals is of paramount importance. According to the literature, study spaces, in particular those closely related to the personalized sector, have proven to be challenging. To address this challenge, the attributes of individuals, namely their perception of public spaces and their interactions with those spaces, should be analyzed to give interior designers something to work with. This paper follows an analytic-descriptive research methodology, outlining case-study libraries that have personalized their interior spaces. Its primary objective is to investigate the type of personalization; its secondary objectives are (I) to recognize the physical program and the spatial connections in the interior design of a library and (II) to analyze each personalized space in relation to the other spaces of the library. The significance of the current research lies in the concept of personalization as one of the most recent methods of attracting people to libraries. Previous research exists in this regard, but the lack of data concerning personalization makes the topic worth investigating. Hence, this study aims to put forward approaches, through real case studies, that designers can use to deal with this concept.

Keywords: interior design, library, library design, personalization

627 Numerical Simulation for a Shallow Braced Excavation of Campus Building

Authors: Sao-Jeng Chao, Wen-Cheng Chen, Wei-Hung Lu

Abstract:

To avoid encountering unpredictable factors, geotechnical engineers always conduct numerical analyses for braced excavation design. Simulation in advance can predict the response of the subsequent excavation, so the design can be adjusted to increase the safety factor of the construction. The parameters considered include the geological conditions, soil properties, soil distributions, loading types, and the analysis and design methods. National Ilan University is located on the LanYang plain, which is deposited mainly with clayey soil and loose sand and is thus vulnerable to displacement from external influences. A braced excavation with a complete monitoring program was carried out at National Ilan University. This study uses RIDO, a one-dimensional finite element program, to simulate the excavation process. The results predicted by the numerical simulation are compared with the monitored results of construction to explore the differences between them. Numerical simulation of the excavation process can be used to analyze the retaining structures and to understand the relationship between displacement and the supporting system, so that the deformation and stress distribution resulting from the braced excavation can be understood in advance. Problems can thus be prevented prior to construction, and the important influencing factors identified during design and construction.

Keywords: Excavation, numerical simulation, RIDO, retaining structure.

626 Forensic Science in Dr. Jekyll and Mr. Hyde: Trails of Utterson's Quest

Authors: Kyu-Jeoung Lee, Jae-Uk Choo

Abstract:

This paper investigates The Strange Case of Dr Jekyll and Mr Hyde from the point of view of Gabriel John Utterson, a central character in the book. Utterson is no different from a forensic investigator as he tries to collect evidence on the mysterious Mr. Hyde's relationship to Dr. Jekyll. From Utterson's perspective, Jekyll is the 'victim' of a potential scandal and blackmail, and Hyde is the 'suspect' of a possible 'crime'. Utterson intends to figure out Hyde's identity, connect his motive with his actions, and gather witness accounts. During Utterson's quest, the outside materials available to him, along with the social backgrounds of Hyde and Jekyll, are analyzed; the archives left in Jekyll's chamber also play a part in providing evidence. Utterson investigates on the basis of what he has known about Jekyll throughout his life, and how Jekyll acted in his eyes until he was gone, seeking possible explanations for Jekyll's actions. The relationship between Jekyll and Hyde becomes the major question, as the social background offers clues pointing in the direction of illegitimacy and prostitution; there remains a possibility that Jekyll and Hyde were, in fact, completely different people. Utterson receives a full statement and confession from Jekyll himself at the end of the story, which gives the reader a possible truth about what happened. Stevenson's Dr. Jekyll and Mr. Hyde leads readers, as it does Utterson, to find the connection between Hyde and Jekyll using the methods of history, culture, and science. Utterson's quest to uncover Hyde exemplifies the application of these various fields in his attempt to see whether Hyde's inheritance was legal; all of this, taken together, could technically be considered forensic investigation.

Keywords: Dr. Jekyll and Mr. Hyde, forensic investigation, illegitimacy, prostitution, Robert Louis Stevenson.

625 A Bibliometric Assessment on Sustainability and Clustering

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner, David Gabriel F. de Barros

Abstract:

Review studies are useful for the analysis of research problems. Among the types of review documents, bibliometric studies are common; this type of study often provides a global view of a research problem and helps academics worldwide better understand the context of a research area. This document presents a bibliometric view of clustering techniques applied to sustainability problems. The authors sought to determine which issues most often employ clustering techniques, and which sustainability issue is currently the most impactful in research. In the bibliometric analysis, we found 10 different groups of research in clustering applications for sustainability issues: Energy; Environmental; Non-urban Planning; Sustainable Development; Sustainable Supply Chain; Transport; Urban Planning; Water; Waste Disposal; and Others. Moreover, by analyzing the citations of each group, it was discovered that the Environmental group can be classified as the most impactful research cluster in this area. Content analysis of each paper in the Environmental group showed that the k-means technique is preferred for solving sustainability problems with clustering methods, since it appeared most often among the documents. The authors conclude that a bibliometric assessment can help indicate a gap in research on waste disposal, which was the group with the fewest publications, and identify the most impactful research on environmental problems.

Keywords: Bibliometric assessment, clustering, sustainability, territorial partitioning.
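
The grouping of papers into thematic clusters can be reproduced in miniature with TF-IDF vectors and k-means, the technique the study found dominant. A hedged scikit-learn sketch over toy abstracts (the study's corpus and group labels are not reproduced here):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for paper abstracts.
docs = [
    "clustering renewable energy consumption profiles for smart grids",
    "energy demand clustering and load forecasting",
    "k-means segmentation of water quality monitoring stations",
    "river water pollution clusters and sampling design",
    "urban planning zones grouped by land use clustering",
    "clustering city districts for sustainable urban transport planning",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, doc in zip(km.labels_, docs):
    print(label, "-", doc)
```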
