Search results for: methods of information culture of students
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8616

576 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar, the effects of which might end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a monitoring device, such as a wristwatch, that alerts those living with high blood glucose to danger ahead of time, and to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of obtaining an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language and run for 1000 epochs to bring the errors to the barest minimum. The internal circuitry of the device comprises compatible hardware matched to the nature of each of the input neurons. Light-emitting diodes (LEDs) of red, green, and yellow are used as the neural network outputs to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional methods of carrying out blood tests.

Keywords: Accu-Chek, diabetes, neural network, pattern recognition.
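
As a rough illustration of the 8-15-10 configuration described above, the following Python sketch trains a small feed-forward network with plain backpropagation for 1000 epochs. The data, learning rate, and weight initialization are invented stand-ins; the paper's actual sensor inputs and Java implementation are not reproduced here.

    import numpy as np

    # Illustrative sketch of an 8-15-10 feed-forward network trained with
    # backpropagation for 1000 epochs.  Random binary patterns stand in for
    # the XOR-formed sensor inputs, which are not published in the abstract.
    rng = np.random.default_rng(0)

    n_in, n_hid, n_out = 8, 15, 10          # 8-15-10 configuration (input incl. bias)
    X = rng.integers(0, 2, size=(200, n_in)).astype(float)   # hypothetical inputs
    y = np.eye(n_out)[rng.integers(0, n_out, size=200)]      # one-hot symptom classes

    W1 = rng.normal(0, 0.5, (n_in, n_hid))
    W2 = rng.normal(0, 0.5, (n_hid, n_out))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for epoch in range(1000):               # 1000 epochs, as in the abstract
        h = sigmoid(X @ W1)                 # hidden-layer activations
        out = sigmoid(h @ W2)               # output layer (10 "LED" classes)
        err = out - y
        # backpropagate the error through both weight matrices
        dW2 = h.T @ (err * out * (1 - out))
        dh = (err * out * (1 - out)) @ W2.T
        dW1 = X.T @ (dh * h * (1 - h))
        W2 -= lr * dW2 / len(X)
        W1 -= lr * dW1 / len(X)

    print("final mean squared error:", float(np.mean(err ** 2)))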

575 Influence of Atmospheric Physical Effects on Static Behavior of Building Plate Components Made of Fiber-Cement-Based Materials

Authors: Jindrich J. Melcher, Marcela Karmazínová

Abstract:

The paper presents brief information on particular results of an experimental study focused on the behavior of structural plated components made of fiber-cement-based materials and used in building construction, when exposed to the atmospheric physical effects of weather changes in the summer period. Weather changes, represented namely by temperature and rain, also cause changes in the temperature and moisture of the investigated structural components. This can affect their static behavior, that is, the stresses and deformations, which were monitored as the main outputs of the tests performed. The experimental verification is based on simulating the influence of temperature and rain using a defined procedure of warming and water sprinkling corresponding to the weather conditions during the summer period in the South Moravian region of the Czech Republic, for which the application of these structural components is mainly planned. Two types of components were tested: (i) glass-fiber-concrete panels used for building façades and (ii) fiber-cement slabs used mainly for cladding, but also as a part of floor structures, lost shuttering, and so on.

Keywords: Atmospheric physical effect, building component, experiment, fiber-cement, glass-fiber-concrete, simulation, static behavior, test, warming, water sprinkling, weather.

574 Face Recognition Using Double Dimension Reduction

Authors: M. A. Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off on arriving at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction, down to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image, owing to its computational speed and feature extraction potential. A subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database, and a color database. The proposed technique has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying image pose, intensity, and illumination level.

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.
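
The decimation-plus-DCT pipeline can be illustrated with a short sketch. The square low-frequency coefficient block and the nearest-neighbour matcher below are simplifying assumptions made for illustration; the paper selects a low-to-mid frequency subset and its own matching scheme.

    import numpy as np
    from scipy.fft import dctn

    def dct_features(face, decimation=2, n_coeffs=8):
        # decimate the face image, then keep a low-frequency DCT block
        small = face[::decimation, ::decimation]      # simple image decimation
        coeffs = dctn(small, norm='ortho')            # 2-D Discrete Cosine Transform
        return coeffs[:n_coeffs, :n_coeffs].ravel()   # retained coefficient subset

    def recognize(probe, gallery_feats, gallery_ids, **kw):
        # nearest-neighbour matching in the reduced DCT feature space
        f = dct_features(probe, **kw)
        d = [np.linalg.norm(f - g) for g in gallery_feats]
        return gallery_ids[int(np.argmin(d))]

    # usage with random stand-in images (in practice: aligned grayscale faces)
    rng = np.random.default_rng(0)
    gallery = [rng.random((64, 64)) for _ in range(5)]
    feats = [dct_features(g) for g in gallery]
    print(recognize(gallery[2] + 0.01 * rng.random((64, 64)), feats, list(range(5))))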

573 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, it is desirable to use more data for better results.

Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.
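
The Markov-chain side of the approach can be made concrete with a small sketch: a transition probability matrix is applied year by year to a condition-state distribution to give the expected condition rating. The matrix, states, and ratings below are hypothetical, not the paper's estimates.

    import numpy as np

    # Hypothetical 4-state transition matrix (rows: current state, cols: next year)
    P = np.array([[0.90, 0.10, 0.00, 0.00],
                  [0.00, 0.85, 0.15, 0.00],
                  [0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 1.00]])
    ratings = np.array([9, 7, 5, 3])        # condition rating of each state
    state = np.array([1.0, 0.0, 0.0, 0.0])  # a new bridge starts in the best state

    for year in range(1, 31):
        state = state @ P                   # one-year deterioration step
        if year % 10 == 0:
            print(year, "expected rating:", round(float(state @ ratings), 2))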

572 Socio-Demographic Characteristics and Psychosocial Consequences of Sickle Cell Disease: The Case of Patients in a Public Hospital in Ghana

Authors: Vincent A. Adzika, Franklin N. Glozah, Collins S. K. Ahorlu

Abstract:

Background: Sickle Cell Disease (SCD) is of major public-health concern globally, with the majority of patients living in Africa. Despite its relevance, there is a dearth of research determining the socio-demographic distribution and psychosocial impact of SCD in Africa. The objective of this study was therefore to examine the socio-demographic distribution and psychosocial consequences of SCD among patients in Ghana and to assess their quality of life and coping mechanisms. Methods: A cross-sectional research design was used, involving the completion of questionnaires on socio-demographic characteristics, quality of life, anxiety and depression. Participants were 387 male and female patients attending a sickle cell clinic in a public hospital. Results: Results showed no gender or marital-status differences in anxiety and depression. However, there were differences by age and level of education in depression, but not in anxiety. In terms of quality of life, patients were most satisfied with the presence of love, friends and relatives, as well as their home, community and neighbourhood environment. While pains of varied nature and severity were the major reasons for attending hospital, going to the hospital and having faith in God were the most frequently reported mechanisms for coping with unbearable SCD attacks. Multiple regression analysis showed that some socio-demographic and quality of life indicators had strong associations with anxiety and/or depression. Conclusion: It is recommended that a multi-dimensional intervention strategy incorporating psychosocial dimensions be considered in the treatment and management of SCD.

Keywords: Sickle cell disease, quality of life, anxiety, depression, socio-demographic characteristics, Ghana.

571 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular shopping platforms in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers have to register their name, age, gender, and other information in advance in order to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has often been used to extract semantic relationships between documents (here representing consumers) and words (here representing products) in document classification. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.

Keywords: Doc2Vec, marketing, online marketplace, recommendation system.
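
A minimal Doc2Vec baseline along these lines can be set up with gensim by treating each user as a document whose "words" are shop and item tokens, so that all three entity types land in one vector space. This is only a rough stand-in for the authors' combined model; the logs, token names, and hyperparameters are invented.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Hypothetical purchase logs: each user is a "document" whose "words" are
    # shop and item tokens, so users, shops and items share one feature space.
    logs = {
        "user1": ["shopA", "itemX", "shopA", "itemY"],
        "user2": ["shopB", "itemY", "shopB", "itemZ"],
        "user3": ["shopA", "itemX", "shopB", "itemZ"],
    }
    docs = [TaggedDocument(words, [uid]) for uid, words in logs.items()]
    model = Doc2Vec(docs, vector_size=16, window=3, min_count=1, epochs=100, dm=1)

    # shops and items closest to a given user's embedding
    print(model.wv.similar_by_vector(model.dv["user1"], topn=3))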

570 Multi-Scale Gabor Feature Based Eye Localization

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho

Abstract:

Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful application. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors, which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images. It then localizes the eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely to be very close to the position where the Gabor jet has the best similarity match with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, which causes a much greater computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden: one first localizes the eyes based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and then localizes the eyes in the original-resolution face image using the eye coordinates found in the coarse-scale image as initial points. Several experiments and comparisons with other eye localization methods reported in the literature show the efficiency of our proposed method.

Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
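
The coarse-to-fine search itself is easy to sketch independently of the Gabor machinery: score a downsampled image, then refine in a small window at full resolution. Here `similarity` is a placeholder for the Gabor-jet matching score, and `step` and `radius` are assumed values.

    import numpy as np

    def localize(image, similarity, step=2, radius=4):
        # coarse stage: score every pixel of a downsampled copy with a
        # stand-in for the Gabor-jet similarity
        coarse = image[::step, ::step]
        scores = np.array([[similarity(coarse, y, x)
                            for x in range(coarse.shape[1])]
                           for y in range(coarse.shape[0])])
        cy, cx = np.unravel_index(np.argmax(scores), scores.shape)
        y0, x0 = cy * step, cx * step          # initial point for the fine search
        # fine stage: refine only within a small full-resolution window
        best, best_s = (y0, x0), -np.inf
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = y0 + dy, x0 + dx
                if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                    s = similarity(image, y, x)
                    if s > best_s:
                        best, best_s = (y, x), s
        return best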

569 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor computation is a challenging issue for many research applications in areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active region of interest (AROI). Furthermore, it is needed to compute spatial metric relationships over diverse imaging areas in pattern recognition applications. In this paper, we propose a new methodology for finding the nearest neighbor of a point by constructing a virtual grid of hexagonal cells and locating every point within them. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest neighbors of a query point Φ are fetched by searching the hexagons outward, and the search is repeated until an AROI for Φ is found. If a point Υ is located, searching continues in the nearest hexagons in a circular way: the first hexagon is considered level 0 (L0) and the surrounding hexagons level 1 (L1). If Υ is located in L1, the search continues in the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in minimizing the time complexity required for searching the neighbors, which in turn sufficiently improves classification efficiency.

Keywords: Hexagon cells, k-nearest neighbors, Nearest neighbor, Pattern recognition, Query pattern, Virtual grid.
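
A sketch of the hexagon-grid search might look as follows: points are bucketed into axial-coordinate hexagon cells, and a query scans level 0, then expanding rings, continuing one level past the first hit as described above. The cell size and the details of the confirmation rule are assumptions.

    import math
    from collections import defaultdict

    SIZE = 1.0  # hexagon cell size; hypothetical, tune to the point density

    def hex_cell(x, y):
        # map a point to its pointy-top hexagon cell in axial coordinates,
        # using standard cube rounding to the nearest cell centre
        q = (math.sqrt(3) / 3 * x - y / 3) / SIZE
        r = (2 * y / 3) / SIZE
        cx, cz = q, r
        cy = -cx - cz
        rx, ry, rz = round(cx), round(cy), round(cz)
        dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
        if dx > dy and dx > dz:
            rx = -ry - rz
        elif dy > dz:
            ry = -rx - rz
        else:
            rz = -rx - ry
        return (rx, rz)

    def build_grid(points):
        grid = defaultdict(list)
        for p in points:
            grid[hex_cell(*p)].append(p)
        return grid

    def nearest(grid, query, max_level=5):
        # scan the query's own cell (L0), then expanding hexagon rings,
        # continuing one level past the first hit to confirm it
        q0, r0 = hex_cell(*query)
        best, best_d, found_level = None, float("inf"), None
        for level in range(max_level + 1):
            for dq in range(-level, level + 1):
                for dr in range(-level, level + 1):
                    if (abs(dq) + abs(dr) + abs(dq + dr)) // 2 != level:
                        continue  # not on this ring
                    for p in grid.get((q0 + dq, r0 + dr), []):
                        d = math.dist(p, query)
                        if d < best_d:
                            best, best_d = p, d
            if best is not None and found_level is None:
                found_level = level
            if found_level is not None and level > found_level:
                break
        return best

    pts = [(0.2, 0.3), (2.5, 1.1), (-1.0, 4.2)]
    print(nearest(build_grid(pts), (2.0, 1.0)))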

568 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage

Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves

Abstract:

Apple bruise damage from harvesting, handling, transporting and sorting is considered the major source of reduced fruit quality, resulting in a loss of profits for the entire fruit industry. The three factors which can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, predicting the level of damage, stress distribution and deformation of fruits under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and is dependent on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple were determined using mechanical tests. Preliminary results showed different values of the Young's modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits and vegetables during processing, which will allow the introduction of new export destinations and consequently an increase in the economic profits of this sector.

Keywords: Apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit bruise damage, electronic device.

567 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contour with the greedy snake algorithm. In the proposed method, we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.

Keywords: Video tracking, particle filter, greedy snake, neural network.
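
The particle-filter step of the pipeline, taken in isolation, can be sketched as the usual predict-weight-resample cycle. The random-walk motion model, the `motion_std` value, and the `likelihood` callback (standing in for the kernel/contour target model and neural-network segmentation) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def particle_filter_step(particles, likelihood, motion_std=5.0):
        # predict: random-walk motion model over (x, y) target positions
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # update: weight each particle by the measurement likelihood, which in
        # the paper comes from the kernel/contour target model
        w = np.array([likelihood(x, y) for x, y in particles])
        w /= w.sum()
        # resample: draw particles in proportion to their weights
        particles = particles[rng.choice(len(particles), len(particles), p=w)]
        return particles, particles.mean(axis=0)   # new set and centre estimate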

566 Sewer Culvert Installation Method to Accommodate Underground Construction in an Urban Area with Narrow Streets (The Development of Shield Switching Type Micro-Tunneling Method and the Introduction of Construction Examples)

Authors: Osamu Igawa, Hiroshi Kouchiwa, Yuji Ito

Abstract:

In recent years, a reconstruction project for sewer pipelines has been progressing in Japan with the aim of renewing old sewer culverts. However, it is difficult to secure a sufficient base area for shafts in an urban area because many streets are narrow with a complex layout. As a result, construction in such urban areas is generally very demanding. In urban areas, there is a strong requirement for a safe, reliable and economical construction method that does not disturb the public's daily life and urban activities. With this in mind, we developed a new construction method called the "shield switching type micro-tunneling method," which integrates the micro-tunneling method and the shield method. In this method, the pipeline is first constructed for sections that are gently curved or straight using the economical micro-tunneling method, and then the method is switched to the shield method for sections with a sharp curve or a series of curves, without establishing an intermediate shaft. This paper provides the information, features and construction examples of this newly developed method.

Keywords: Micro-tunneling method, Secondary lining applied RC segment, Sharp curve, Shield method, Switching type.

565 Combating Money Laundering in the Banking Industry: Malaysian Experience

Authors: Aspalella A. Rahman

Abstract:

Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been the central element of money laundering. This is in part due to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered an important source of valuable information for the detection of money laundering. However, from the banks' perspective, the main reason for their existence is to make as much profit as possible. Hence their cultural and commercial interests are totally distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of the laws not only creates significant compliance problems for banks, but also has the potential to adversely affect their operations. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. It must be said that effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by banks and thereby to ensure effective implementation of the laws in combating money laundering.

Keywords: Banking industry, Bank Negara, Money laundering, Malaysia.

564 Application of HSA and GA in Optimal Placement of FACTS Devices Considering Voltage Stability and Losses

Authors: A. Parizad, A. Khazali, M. Kalantar

Abstract:

Voltage collapse is an instability of heavily loaded electric power systems that causes declining voltages and blackouts. Power systems are predicted to become more heavily loaded in the future decade, as the demand for electric power rises while economic and environmental concerns limit the construction of new transmission and generation capacity. Heavily loaded power systems are closer to their stability limits, and voltage collapse blackouts will occur if suitable monitoring and control measures are not taken. FACTS devices can be used to control transmission lines. In this paper, the harmony search algorithm (HSA) and the genetic algorithm (GA) are applied to determine the optimal location of FACTS devices in a power system to improve power system stability. Three types of FACTS devices (TCPAT, UPFS, and SVC) are introduced. Bus undervoltage is addressed by controlling the reactive power of the shunt compensator. A combined series-shunt compensator is also used to control the transmission power flow and bus voltage simultaneously. Different scenarios are considered. First, TCPAT, UPFS, and SVC are placed individually in transmission lines and the indices are calculated. Then two types of the above controllers attempt to improve the parameters at random. The last scenario improves the voltage stability index and losses by implementing all three controller types simultaneously. These scenarios are executed on a typical 34-bus test system and yield improvements in the voltage profile and reductions in power losses; they may also permit an increase in power transfer capacity, maximum loading, and voltage stability margin.

Keywords: FACTS Devices, Voltage Stability Index, optimal location, Heuristic methods, Harmony search, Genetic Algorithm.

563 The Risk Assessment of Nano-particles and Investigation of Their Environmental Impact

Authors: Nader Nabhani, Amir Tofighi

Abstract:

Nanotechnology is the science of creating, using and manipulating objects which have at least one dimension in the range of 0.1 to 100 nanometers. In other words, nanotechnology is reconstructing a substance using its individual atoms and arranging them in a way that is desirable for our purpose. The main reason that nanotechnology has been attracting attention is the unique properties that objects show when they are formed at the nano-scale. These differing characteristics of nano-scale materials, compared with their naturally occurring forms, are both useful in creating high-quality products and dangerous when in contact with the body or spread into the environment. In order to control and lower the risk of such nano-scale particles, the following three topics should be considered: 1) First, these materials can cause long-term diseases whose effects may appear years after the particles penetrate human organs, and since this science has only recently been developed at an industrial scale, not enough information is available about their hazards to the body. 2) Second, these particles can easily spread into the environment and remain in air, soil or water for a very long time, in addition to having a high ability to penetrate the skin and cause new kinds of diseases. 3) Third, to protect the body and the environment against the danger of these particles, protective barriers must be finer than the particles themselves, and such defenses are hard to accomplish. This paper reviews, discusses and assesses the risks that humans and the environment face as this new science develops at a high rate.

Keywords: Nanotechnology, risk assessment, environment.

562 Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient

Authors: F. F. Howard, C. B. Boye, I. Yakubu, J. S. Y. Kuma

Abstract:

Remote sensing offers one way to produce land use and land cover maps, through a procedure known as image classification. Numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using Landsat 8 satellite images acquired in December 2020 covering the study area, downloaded from the USGS. The Landsat image, with 30 m resolution, was geo-referenced to the WGS_84 datum and the Universal Transverse Mercator (UTM) Zone 30N coordinate projection system. A radiometric correction was applied to the image to reduce noise. This study consists of two sections: the Land Use/Land Cover (LULC) classification, and accuracy assessment using the confusion and contingency matrices and the Kappa coefficient. The LULC classes were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). An overall accuracy of 97.87% and a Kappa coefficient (K) of 97.3% were obtained for the confusion matrix, while an overall accuracy of 95.7% and a Kappa coefficient of 0.947 were obtained for the contingency matrix. The Kappa coefficients were rated as substantial; hence, the classified image is fit for further research.

Keywords: Confusion matrix, contingency matrix, kappa coefficient, land use/land cover, accuracy assessment.
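
For readers who want to reproduce such accuracy figures on their own data, the confusion matrix, overall accuracy, and Cohen's kappa can be computed as below; the class names and label vectors are invented examples, not the study's data.

    import numpy as np
    from sklearn.metrics import confusion_matrix, cohen_kappa_score

    classes = ["vegetation", "water", "mining", "forest", "settlement"]
    # hypothetical reference vs. classified labels for a handful of check points
    y_true = ["vegetation", "forest", "forest", "mining", "settlement", "vegetation"]
    y_pred = ["vegetation", "forest", "vegetation", "mining", "settlement", "vegetation"]

    cm = confusion_matrix(y_true, y_pred, labels=classes)
    overall_accuracy = np.trace(cm) / cm.sum()          # correct / total
    kappa = cohen_kappa_score(y_true, y_pred)           # agreement beyond chance
    print(cm)
    print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")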

561 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method

Authors: M. T. Tsepav, Y. Adamu, M. A. Umar

Abstract:

A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area in Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity and protective capacity of the area, from which an aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data sets along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising the apparent resistivities, overburden thicknesses and depths. This information was used to evaluate the longitudinal conductance and transmissivities of the layers. The results show generally low resistivities across the survey area, with the average longitudinal conductance varying from 0.0237 Siemens at VES 6 to 0.1261 Siemens at VES 15, and almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω.m2 at VES 4 to 299070 Ω.m2 at VES 1. All but VES 4 and VES 14 had an average overburden transmissivity greater than 400 Ω.m2. These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.

Keywords: Geoelectric survey, corrosivity, protective capacity, transmissivity.
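
The layer-wise quantities behind these numbers are simple sums over the inverted layered model: the longitudinal conductance S = Σ h/ρ, and the Ω.m2 quantity reported as transmissivity, which corresponds to the Dar Zarrouk transverse resistance Σ ρ·h. The layer values below are hypothetical.

    # Dar Zarrouk parameters for one VES station, from the inverted layered
    # model (resistivity in ohm-m, thickness in m); the bottom half-space is
    # excluded, and the layer values are invented for illustration.
    layers = [(45.0, 2.1), (120.0, 6.4), (300.0, 14.0)]

    S = sum(h / rho for rho, h in layers)   # longitudinal conductance, Siemens
    T = sum(rho * h for rho, h in layers)   # transverse resistance, ohm-m^2
    print(f"S = {S:.4f} Siemens, T = {T:.1f} ohm-m^2")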

560 Alumina Supported Cu-Mn-La Catalysts for CO and VOCs Oxidation

Authors: Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Petya Cv. Petrova, Georgi V. Avdeev, Diana D. Nihtianova, Krasimir I. Ivanov, Tatyana T. Tabakova

Abstract:

Recently, copper- and manganese-containing systems have been recognized as active and selective catalysts in many oxidation reactions. The main idea of this study is to obtain more information about γ-Al2O3-supported Cu-Mn-La catalysts and to evaluate their activity towards the simultaneous oxidation of CO, CH3OH and dimethyl ether (DME). The catalysts were synthesized by impregnation of the support with a mixed aqueous solution of copper, manganese and lanthanum nitrates under different conditions. XRD, HRTEM/EDS, TPR and thermal analysis were performed to investigate the catalysts' bulk and surface properties. The texture characteristics were determined with a Quantachrome Instruments NOVA 1200e specific surface area and pore analyzer. The catalytic measurements of single-compound oxidation were carried out on continuous-flow equipment with a four-channel isothermal stainless steel reactor over a wide temperature range. On the basis of the XRD and HRTEM/EDS analyses, it was concluded that the active component of the mixed Cu-Mn-La/γ-alumina catalysts strongly depends on the Cu/Mn molar ratio and consists of at least four compounds: CuO, La2O3, MnO2 and Cu1.5Mn1.5O4. A homogeneous distribution of the active component on the carrier surface was found. The chemical composition strongly influenced the catalytic properties, and this influence varied considerably across the different processes.

Keywords: Supported copper-manganese-lanthanum catalysts.

559 Korea and Japan Economic Relations: An Analysis through the World Trade Organization

Authors: Caroline S. Dutra, Tatiana C. Squeff

Abstract:

It is well known that the history between South Korea and Japan influences their international relations and, thus, their economic relations as well. In this sense, it is impossible to analyze the latter without understanding the development of the former, which is known for episodes of hostility, such as the Japanese colonization, but also for moments of cultural and trade interchange. Indeed, since 1965, with the establishment of diplomatic relations between the two countries, their trade relations have improved, especially after both nations signed the General Agreement on Tariffs and Trade (GATT). Thereafter, with the establishment of the World Trade Organization (WTO) in 1995, another chapter of their diplomatic and economic relations was inaugurated. Hence, bearing in mind this history, this research examines their relations through an analysis of the WTO panels they have engaged in against each other, which are, in chronological order: "DS323: Japan – Import Quotas on Dried Laver and Seasoned Laver", "DS336: Japan – Countervailing Duties on Dynamic Random Access Memories from Korea", "DS495: Korea – Import Bans, and Testing and Certification Requirements for Radionuclides", "DS553: Korea – Sunset Review of Anti-Dumping Duties on Stainless Steel Bars" and "DS571: Korea – Measures Affecting Trade in Commercial Vessels". The objective of this case analysis is to point out the most conflictual areas between Japan and South Korea in their economic relations, so that it is possible to comment on their future (economic) relations and other possible outcomes. To do so, bibliographic and documentary research is conducted, particularly involving the WTO and the nations under consideration. Regarding methods, it is important to highlight that this is applied research in the field of international economic relations and international law, following a hypothetico-deductive model.

Keywords: International economic relations, Japan, South Korea, World Trade Organization.

558 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm

Authors: Xiang Jianhong, Wang Cong, Wang Linyu

Abstract:

With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.

Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.
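
JBMOLS itself is not given in pseudocode here, but the greedy recovery family it extends is easy to sketch. Below is plain orthogonal matching pursuit as a baseline, with a Gaussian (not Bernoulli) measurement matrix and an invented sparse signal; the authors' joint block selection is not reproduced.

    import numpy as np

    def omp(A, y, k):
        # standard orthogonal matching pursuit: greedily pick the column most
        # correlated with the residual, then least-squares fit on the support
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((64, 256)) / 8.0     # Gaussian measurement matrix
    x_true = np.zeros(256)
    x_true[[5, 40, 100]] = [1.0, -2.0, 0.5]      # invented 3-sparse signal
    y = A @ x_true
    print(np.linalg.norm(omp(A, y, 3) - x_true))  # near zero on exact recovery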

557 Evaluation of Energy and Environmental Aspects of Reduced Tillage Systems Applied in Maize Cultivation

Authors: E. Sarauskis, L. Masilionyte, Z. Kriauciuniene, K. Romaneckas, S. Buragiene

Abstract:

In maize growing technologies, tillage operations are the most time-consuming and require the greatest fuel input. Substituting conventional tillage, which involves deep ploughing, with other reduced tillage methods can reduce technological production costs, diminish soil degradation and environmental pollution from greenhouse gas emissions, and improve the economic competitiveness of agricultural produce.

Experiments designed to assess the energy and environmental aspects of different reduced tillage systems applied in maize cultivation were conducted at Aleksandras Stulginskis University, taking into account Lithuania's economic and climate conditions. The study involved five tillage treatments: deep ploughing (DP, control), shallow ploughing (SP), deep cultivation (DC), shallow cultivation (SC) and no-tillage (NT).

Our experimental evidence suggests that, compared with deep ploughing, the application of reduced tillage systems makes it feasible to reduce fuel consumption by 13-58% and working time input by between 8.4% and nearly threefold, to reduce the cost price of maize cultivation operations, and to decrease environmental pollution with CO2 gas by 30 to 146 kg ha-1.

Keywords: Reduced tillage, energy and environmental assessment, fuel consumption, CO2 emission, maize.

556 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG

Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang

Abstract:

In this paper, an automatic QRS complex detection algorithm was applied to analyze ECG recordings, and five criteria for diagnosing dangerous arrhythmia were applied in a protocol-type automatic arrhythmia diagnosis system. The detection algorithm identified the distribution of QRS complexes in the ECG recordings and related information, such as heart rate and RR interval. In this investigation, twenty sampled ECG recordings of patients with different pathological conditions were collected for off-line analysis. A combined application of four digital filters for improving the ECG signal quality and raising the QRS detection rate was proposed as pre-processing. Both hardware filters and digital filters were applied to eliminate the different types of noise mixed with the ECG recordings. An automatic QRS detection algorithm was then applied to verify the distribution of QRS complexes. Finally, quantitative clinical criteria for diagnosing arrhythmia were programmed into a practical application for automatic arrhythmia diagnosis as a post-processor. The results of the automatic dangerous arrhythmia diagnoses were compared with the results of off-line diagnoses by experienced clinical physicians, and the comparison showed that the automatic diagnosis achieved a matching rate of 95% against an experienced physician's diagnoses.

Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.
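
The derivative-square-integrate-threshold chain commonly used for QRS detection can be sketched as follows; the window lengths, threshold factor, and refractory period are illustrative choices, not the paper's tuned filters.

    import numpy as np

    def detect_qrs(ecg, fs=360):
        # derivative emphasises the steep QRS slopes
        diff = np.diff(ecg)
        # squaring makes all peaks positive and amplifies large slopes
        sq = diff ** 2
        # ~150 ms moving-window integration smooths each QRS into one bump
        win = int(0.15 * fs)
        mwi = np.convolve(sq, np.ones(win) / win, mode="same")
        thr = 0.4 * mwi.max()                 # simple global threshold
        refractory = int(0.2 * fs)            # no two QRS within 200 ms
        peaks, last = [], -refractory
        for i in range(1, len(mwi) - 1):
            if (mwi[i] > thr and mwi[i] >= mwi[i - 1]
                    and mwi[i] > mwi[i + 1] and i - last > refractory):
                peaks.append(i)
                last = i
        rr = np.diff(peaks) / fs              # RR intervals in seconds
        hr = 60.0 / rr.mean() if len(rr) else float("nan")
        return peaks, rr, hr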

555 Designing an Editorialization Environment for Repeatable Self-Correcting Exercises

Authors: M. Kobylanski, D. Buskulic, P.-H. Duron, D. Revuz, F. Ruggieri, E. Sandier, C. Tijus

Abstract:

In order to design a cooperative e-learning platform, we observed teams of a Teacher [T], a Computer Scientist [CS] and an exercise programmer-designer [ED] cooperating on the conception of a self-correcting exercise, but without the use of such a device, in order to capture the kinds of interactions a useful platform might provide. To do so, we first ran a task analysis of how T, CS and ED should cooperate in order to achieve, at best, the task of creating and implementing self-directed, self-paced, repeatable self-correcting exercises (RSE) in the context of open educational resources. The formalization of the whole process was based on the "objectives, activities and evaluations" theory of educational task analysis. Second, using the resulting frame as a "how-to-do-it" guide, we ran a series of three contrasting RSE-production hackathons to collect data about the cooperative process that could later be used to design the collaborative e-learning platform. Third, we used two complementary methods to collect, code and analyze the survey data: the directional flow of interaction among T-CS-ED experts holding a functional role, and Means-End Problem Solving analysis. Fourth, we listed the derived recommendations useful for the design of the exerciser as a cooperative e-learning platform. The final recommendations underline the necessity of building (i) an ecosystem that can sustain teams of T-CS-ED experts, (ii) a data-safe platform that still offers accessibility and open discussion about the production of exercises and their resources, and (iii) a good architecture allowing the inheritance of parts of the code of any exercise already in the database, as well as fast implementation of new kinds of exercises along with their associated learning activities.

Keywords: Distance open educational resources, pedagogical alignment, self-correcting exercises, teacher’s involvement, team roles.

554 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue disease is an infectious vector-borne viral disease that is commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including in Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue disease, so prevention and treatment depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. The aim of this study is therefore to estimate the relative risk of dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins with a review of the SMR method, which we then apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, namely the Poisson-gamma model. Both sets of results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem of the SMR when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult, and there is no possibility of allowing for spatial correlation between risks in adjacent areas. The drawbacks of this model have motivated many researchers to propose other alternative methods for estimating the risk.

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.
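
The two estimators compared in the paper reduce to simple per-region formulas: the SMR is O/E (observed over expected cases), and for the Poisson-gamma model with a Gamma(alpha, beta) prior on the risk and O ~ Poisson(E * theta), the posterior mean is (alpha + O) / (beta + E). A toy computation with invented counts follows; note how the zero-case region gets a non-zero smoothed estimate, illustrating the problem the abstract mentions.

    # Relative-risk estimation per region: SMR vs. Poisson-gamma posterior mean.
    alpha, beta = 2.0, 2.0          # hypothetical prior hyperparameters
    observed = [0, 12, 35, 7]       # dengue cases per region (illustrative)
    expected = [4.1, 10.0, 28.5, 6.2]

    for O, E in zip(observed, expected):
        smr = O / E                          # zero whenever no cases observed
        pg = (alpha + O) / (beta + E)        # shrunk towards the prior mean
        print(f"O={O:3d}  E={E:5.1f}  SMR={smr:.2f}  Poisson-gamma={pg:.2f}")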

553 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features

Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light detection method for night-time. First, candidate blobs of traffic lights are extracted from the RGB color image. The input image is represented in the dominant color domain using the color transform proposed by Ruta, and then red- and green-dominant regions are selected as candidates. After candidate blob selection, we apply a shape filter for noise reduction using blob information such as length, area, bounding-box area, etc. A multi-class classifier based on the SVM (Support Vector Machine) is applied to the candidates. Three kinds of features are used. We use basic features such as blob width, height, center coordinates, and area. Brightness-based stochastic features are also used. In particular, geometric moment values between the candidate region and adjacent regions are proposed and used to improve the detection performance. The proposed system is implemented on an Intel Core CPU with 2.80 GHz and 4 GB RAM and tested with urban and rural road videos. Through the tests, we show that the proposed method using PF, BMF, and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.

Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.
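
A multi-class SVM over blob features of this kind can be assembled with scikit-learn as below; the feature layout, labels, and training data are random stand-ins, not the paper's basic/brightness/geometric-moment feature set.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Each candidate blob is described by basic features of the kind named
    # above: width, height, centre x/y, blob area, bounding-box area
    # (brightness and moment features would be appended the same way).
    X_train = rng.random((300, 6))
    y_train = rng.integers(0, 3, 300)   # 0: red light, 1: green light, 2: noise

    clf = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovr")
    clf.fit(X_train, y_train)

    candidate = rng.random((1, 6))      # feature vector of one detected blob
    print("predicted class:", int(clf.predict(candidate)[0]))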

552 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs

Authors: K. N. Dinesh Babu, P. K. Gargava

Abstract:

Energization of a transformer results in a sudden flow of current caused by core magnetization. This current is dominated by the second harmonic, which in turn is used to segregate fault current from inrush current, thus guaranteeing proper operation of the relay. This additional security in the relay sometimes obstructs or delays differential protection in a specific scenario: when second-harmonic content is present during a genuine fault. Such a scenario can result in isolation of the transformer by the Buchholz and pressure release valve (PRV) protections, which act only once the fault has caused greater damage in the transformer. Such delays have a huge impact on insulation failure, and the chances of repairing or rectifying the fault on site become very dismal. Sometimes the delay can cause a fire in the transformer, a situation that wreaks havoc on a substation. Such occurrences have also been observed in the field, where differential relay operation was delayed by 10-15 ms by second-harmonic blocking under some specific conditions. These incidents have led to the need for an alternative solution to eradicate such unwarranted operating delays in the future. The modern numerical relay, called an intelligent electronic device (IED), is embedded with advanced protection features which permit higher flexibility and better provisions for tuning protection logic and settings. Such flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second-harmonic feature for blocking the differential protection with additional security. The analysis and precautionary measures carried out in this case have been simulated and discussed in this paper, to ensure that similar solutions can be adopted to inhibit analogous issues in the future.

Keywords: Differential protection, intelligent electronic device (IED), 2nd harmonic, inrush inhibit.
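
The classic second-harmonic restraint that the paper proposes to switch dynamically can be sketched as a per-cycle DFT ratio check; the 50 Hz fundamental, 1 kHz sampling, and 15% setting are assumed values, and the dynamic switching logic itself is not reproduced here.

    import numpy as np

    def second_harmonic_ratio(i_diff, fs=1000, f0=50):
        # ratio of 2nd harmonic to fundamental in the differential current,
        # estimated over the most recent fundamental cycle by DFT
        n = int(fs / f0)                       # samples per cycle
        spectrum = np.abs(np.fft.rfft(i_diff[-n:]))
        return spectrum[2] / spectrum[1]       # bin 1 = f0, bin 2 = 2*f0

    def inhibit_differential(i_diff, threshold=0.15):
        # classic restraint: block tripping while the 2nd-harmonic content
        # exceeds the setting (often in the 15-20% range); a dynamic scheme
        # would switch this check off once the inrush condition has passed
        return second_harmonic_ratio(i_diff) > threshold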

551 Numerical Study on CO2 Pollution in an Ignition Chamber by Oxygen Enrichment

Authors: Zohreh Orshesh

Abstract:

In this study, a 3D combustion chamber was simulated using FLUENT 6.32. The aims were to obtain accurate information about the combustion profile in the furnace and to check the effect of oxygen enrichment on the combustion process. Oxygen enrichment is an effective way to reduce combustion pollutants. The air-to-fuel flow rate ratio is varied as 1.3, 3.2 and 5.1, and the oxygen-enriched flow rates are 28, 54 and 68 lit/min. Combustion simulations typically involve the solution of turbulent flows with heat transfer, species transport and chemical reactions, and it is common to use the Reynolds-averaged form of the governing equations in conjunction with a suitable turbulence model. The 3D Reynolds-Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model are solved together by the Fluent 6.3 software. A first-order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling. Species mass fractions at the wall are assumed to have zero normal gradients. Results show that the minimum mole fraction of CO2 occurs when the air-to-fuel flow rate ratio is 5.1. Additionally, under a fixed oxygen enrichment condition, increasing the air-to-fuel ratio increases the temperature peak. As a result, oxygen enrichment can reduce the CO2 emission of this kind of furnace at high air-to-fuel ratios.

Keywords: Combustion chamber, Oxygen enrichment, Reynolds-Averaged Navier-Stokes, CO2 emission.

550 A Study on the Condition Monitoring of Transmission Line by On-line Circuit Parameter Measurement

Authors: Il Dong Kim, Jin Rak Lee, Young Jun Ko, Young Taek Jin

Abstract:

An on-line condition monitoring method for transmission lines is proposed in this paper, using electrical circuit theory and IT technology. It is reasonable that the circuit parameters of a transmission line, such as resistance (R), inductance (L), conductance (g) and capacitance (C), expose the electrical conditions and physical state of the line. Those parameters can be calculated from the linear equations composed of voltages and currents measured by the synchro-phasor measurement technique at both ends of the line. A set of linear voltage drop equations containing the four terminal constants (A, B, C, D) is a mathematical model of the transmission line circuit. At least two sets of these linear equations, established under different operating conditions of the line, mathematically yield the circuit parameters. The condition of line connectivity, including the state of connecting or contacting parts of the switching device, may be monitored through resistance variations during operation. The insulation condition of the line can be monitored through conductance (g) and capacitance (C) measurements. Together with other condition monitoring devices, such as partial discharge sensors and visual sensing devices, these measurements may give useful information for detecting any incipient symptoms of faults. A prototype hardware system has been developed and tested on laboratory-level simulated transmission lines. The tests have shown enough evidence to put the proposed method to practical use.

Keywords: Transmission Line, Condition Monitoring, Circuit Parameters, Synchro-phasor Measurement.
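
The core computation is small: with synchro-phasor pairs (Vs, Is, Vr, Ir) from two operating conditions, the constants follow from two 2x2 complex linear systems, since Vs = A·Vr + B·Ir and Is = C·Vr + D·Ir. The phasor values below are invented for illustration.

    import numpy as np

    # Synchro-phasor measurements (complex phasors) at both line ends for two
    # different operating conditions; values are illustrative only.
    Vs = np.array([1.00 + 0.00j, 0.98 - 0.02j])   # sending-end voltages
    Is = np.array([0.50 - 0.10j, 0.80 - 0.25j])   # sending-end currents
    Vr = np.array([0.95 - 0.05j, 0.90 - 0.09j])   # receiving-end voltages
    Ir = np.array([0.48 - 0.12j, 0.77 - 0.28j])   # receiving-end currents

    # Two-port model: Vs = A*Vr + B*Ir and Is = C*Vr + D*Ir.  Two operating
    # conditions give two equations per pair, so (A, B) and (C, D) each come
    # from a 2x2 complex linear system.
    M = np.column_stack([Vr, Ir])
    A_, B_ = np.linalg.solve(M, Vs)
    C_, D_ = np.linalg.solve(M, Is)
    print("A =", A_, " B =", B_)
    print("C =", C_, " D =", D_)
    # R, L, g and C of the line can then be extracted from A..D via the
    # standard relations for the chosen line model.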

549 Effective Traffic Lights Recognition Method for Real Time Driving Assistance Systemin the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the whole color source of the YCbCr channel image and produces binary images of green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise such as traffic signs, street trees, vehicles, and buildings. The noise-removal properties consist of blob information from the binary image: length, area, bounding-box area, etc. Finally, after an intermediate association step, whose goal is to define relevant candidate regions from the previously detected traffic lights, the Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the Adaboost algorithm. For the evaluation, the method is implemented on an Intel Core CPU with 2.80 GHz and 4 GB RAM and tested on urban and rural roads. Through the tests, we compared our method with standard object-recognition learning processes and showed that it reaches a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of our proposed method is 15 ms.

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter.

548 A Study of Shear Stress Intensity Factor of PP and HDPE by a Modified Experimental Method together with FEM

Authors: Md. Shafiqul Islam, Abdullah Khan, Sharon Kao-Walter, Li Jian

Abstract:

Shear testing is one of the most complex testing areas, where the available methods and specimen geometries differ widely from each other. Therefore, a modified shear test specimen (MSTS), combining the simple uniaxial test with a zone of interest (ZOI), is tested, which gives almost pure shear. In this study, material parameters of polypropylene (PP) and high-density polyethylene (HDPE) are first measured by tensile tests with a dogbone-shaped specimen. These parameters are then used as input for the finite element analysis. Secondly, the specially designed MSTS is used to perform the shear stress tests in a tensile testing machine, yielding results in terms of forces, extension, crack initiation, etc. Scanning Electron Microscopy (SEM) is also performed on the shear fracture surface to examine the material behavior. These experiments are then simulated by the finite element method and compared with the experimental results in order to confirm the simulation model. The shear stress state is inspected to assess the usability of the proposed shear specimen. Finally, a geometry correction factor is established for these two materials, in this specific loading and notched geometry, using Linear Elastic Fracture Mechanics (LEFM). From these results, the strain energy of shear failure and the shear stress intensity factor (SIF) of these two polymers are discussed, in the specific application of the screw-cap opening of medical or food packages with a tamper-evidence safety solution.

Keywords: Shear test specimen, Stress intensity factor, Finite Element simulation, Scanning electron microscopy, Screw cap opening.

547 Enhancing Cache Performance Based on Improved Average Access Time

Authors: Jasim. A. Ghaeb

Abstract:

A high-performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and effective speed, the cache has become a common feature of high-performance computers. Enhancing cache performance has proved essential for speeding up cache-based computers. Most enhancement approaches can be classified as either software-based or hardware-controlled. The performance of the cache is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize the cache performance by enhancing the cache hit ratio. The optimum cache performance is obtained by focusing on a cache hardware modification that quickly rejects the missed lines' tags from the hit-or-miss comparison stage, so that a low hit time for the wanted line in the cache is achieved. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, even tags and odd tags, depending on their Least Significant Bit (LSB). This division is exploited by the EOT technique to reject mismatched tags in very little time compared with the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. The high performance of the EOT technique against the familiar mapping technique FAM is shown in the simulation results.

Keywords: Caches, Cache performance, Hit time, Cache hit ratio, Cache mapping, Cache memory.
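
The even-odd split can be sketched in a few lines: tags are tabulated by their LSB at fill time, so a lookup compares against only the matching-parity half and rejects the other half immediately. This is a behavioural sketch of the idea, not the hardware comparator design.

    # Minimal behavioural sketch of the Even-Odd Tabulation (EOT) idea for one
    # cache set: stored tags are pre-split by the least significant bit, so a
    # lookup runs the full tag comparison against only the matching-LSB table.
    class EotSet:
        def __init__(self):
            self.tags = {0: [], 1: []}          # even-tag and odd-tag tables

        def insert(self, tag):
            self.tags[tag & 1].append(tag)

        def lookup(self, tag):
            # quick rejection: tags of the other parity never reach the comparator
            candidates = self.tags[tag & 1]
            return tag in candidates            # full compare on ~half the lines

    cache_set = EotSet()
    for t in (0x1A2, 0x3F1, 0x044):
        cache_set.insert(t)
    print(cache_set.lookup(0x3F1), cache_set.lookup(0x3F0))   # True False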
