Search results for: panel data method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38598

36348 Facility Detection from Image Using Mathematical Morphology

Authors: In-Geun Lim, Sung-Woong Ra

Abstract:

As high-resolution satellite images have become available, many studies have been carried out to exploit them in various fields. This paper proposes a method based on mathematical morphology for extracting the ‘horse's hoof shaped object’. The proposed method enables an automatic object detection system to rapidly track meaningful objects in large satellite images. Because mathematical morphology operates on binary images, the method is very simple. It can therefore easily extract the ‘horse's hoof shaped object’ even from images with indistinct edges around the tracked object and with image quality that varies with filming location, time, and environment. By rapidly extracting such objects with the proposed method, the performance of the automatic object detection system can be improved dramatically.
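The abstract does not give implementation details, so the following is only a minimal sketch of shape extraction by morphological opening, with a hypothetical U-shaped structuring element standing in for the ‘horse's hoof’ template:

```python
import numpy as np
from scipy import ndimage

def extract_shaped_objects(binary_img: np.ndarray, selem: np.ndarray) -> np.ndarray:
    """Keep only regions that match the structuring element's shape.

    A morphological opening (erosion then dilation) removes every
    foreground region the structuring element does not fit inside,
    which is the basic mechanism the abstract describes.
    """
    return ndimage.binary_opening(binary_img, structure=selem)

# Hypothetical 'horse's hoof'-like structuring element: a U-shaped template.
hoof = np.array([[1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1],
                 [1, 1, 1, 1, 1]], dtype=bool)

image = np.random.rand(256, 256) > 0.5   # stand-in for a thresholded satellite image
mask = extract_shaped_objects(image, hoof)
print(mask.sum(), "pixels retained after opening")
```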

Keywords: facility detection, satellite image, object, mathematical morphology

Procedia PDF Downloads 382
36347 Development of a Biomechanical Method for Ergonomic Evaluation: Comparison with Observational Methods

Authors: M. Zare, S. Biau, M. Corq, Y. Roquelaure

Abstract:

A wide variety of observational methods have been developed to evaluate ergonomic workloads in manufacturing. However, the precision and accuracy of these methods remain a subject of debate. The aims of this study were to develop biomechanical methods to evaluate ergonomic workloads and to compare them with observational methods. Two observational methods, the SCANIA Ergonomic Standard (SES) and Rapid Upper Limb Assessment (RULA), were used to assess ergonomic workloads at two simulated workstations. The workstations involved four tasks: tightening and loosening, attachment of tubes, strapping, and other actions. Sensors (inclinometers, accelerometers, and goniometers) were also used to measure biomechanical data. Our findings showed that, for some risk factors, both RULA and SES agreed with the results of the biomechanical methods; however, they disagreed on neck and wrist postures. In conclusion, the biomechanical approach was more precise than the observational methods, but some risk factors evaluated with observational methods were not measurable with the biomechanical techniques developed.

Keywords: ergonomic, observational method, biomechanical methods, workload

Procedia PDF Downloads 388
36346 Investigation of Static Stability of Soil Slopes Using Numerical Modeling

Authors: Seyed Abolhasan Naeini, Elham Ghanbari Alamooti

Abstract:

The static stability of soil slopes was investigated through numerical simulation with the finite element code ABAQUS, and safety factors of the slopes were obtained under the static load of a 10-storey building. The embankments have the same soil condition but different loading distances from the slope heel. Safety factors were estimated with the Strength Reduction Method (SRM), using the Mohr-Coulomb criterion in the numerical simulations. The safety factors were measured in two steps: first under gravity loading, and second under the static loading of a building near the slope heel. These SRM safety factors were compared with values from the Limit Equilibrium Method (LEM). Results show good agreement between SRM and LEM, and safety factors increase as the loading distance from the slope heel increases.
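As a rough illustration of how SRM searches for the safety factor, the sketch below bisects on the reduction factor; `slope_is_stable` is a hypothetical callback standing in for one converged or non-converged finite element run:

```python
import math

def safety_factor_srm(c: float, phi_deg: float, slope_is_stable,
                      f_lo: float = 0.5, f_hi: float = 5.0,
                      tol: float = 1e-3) -> float:
    """Bisect on the strength reduction factor F.

    SRM divides the Mohr-Coulomb parameters by a trial factor F
    (c_trial = c / F, tan(phi_trial) = tan(phi) / F) and looks for the
    largest F at which the analysis still converges; that F is the
    safety factor.
    """
    while f_hi - f_lo > tol:
        f = 0.5 * (f_lo + f_hi)
        c_trial = c / f
        phi_trial = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / f))
        if slope_is_stable(c_trial, phi_trial):  # stand-in for one FE run
            f_lo = f    # still stable: safety factor is at least f
        else:
            f_hi = f    # slope fails: safety factor is below f
    return 0.5 * (f_lo + f_hi)

# Toy stability criterion standing in for an ABAQUS analysis.
demo = safety_factor_srm(25.0, 30.0,
                         lambda c, phi: c + 10 * math.tan(math.radians(phi)) > 20)
print(f"safety factor ~ {demo:.3f}")
```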

Keywords: limit equilibrium method, static stability, soil slopes, strength reduction method

Procedia PDF Downloads 163
36345 The Effect of Nylon and Kevlar Stitching on the Mode I Fracture of Carbon/Epoxy Composites

Authors: Nisrin R. Abdelal, Steven L. Donaldson

Abstract:

Composite materials are widely used in the aviation industry due to their superior properties; however, they are susceptible to delamination. Through-thickness stitching is one technique to alleviate delamination. Kevlar is one of the most common stitching materials, but it is expensive and presents stitching fabrication challenges. Therefore, this study compares the performance of Kevlar with an inexpensive and easy-to-use nylon fiber for stitching to alleviate delamination. Three laminates of unidirectional carbon fiber-epoxy composites were manufactured using the vacuum-assisted resin transfer molding process: one panel stitched with Kevlar, one with nylon, and one unstitched. Mode I interlaminar fracture tests were carried out on specimens from the three laminates, and the results were compared. Fractographic analysis using optical and scanning electron microscopy was conducted to reveal how stitching with Kevlar and nylon affects the internal microstructure of the composite and the interlaminar fracture toughness values.
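The abstract does not state the data-reduction scheme; assuming the specimens follow the usual double cantilever beam configuration for Mode I tests, a common simple-beam-theory reduction computes the strain energy release rate as in this sketch:

```python
def g1c_beam_theory(P: float, delta: float, b: float, a: float) -> float:
    """Mode I strain energy release rate from simple beam theory,
    G_I = 3 * P * delta / (2 * b * a), with load P (N), opening
    displacement delta (m), specimen width b (m), and crack length a (m).
    """
    return 3.0 * P * delta / (2.0 * b * a)

# Example: 60 N load, 5 mm opening, 25 mm wide specimen, 50 mm crack.
print(f"G_IC ~ {g1c_beam_theory(60.0, 5e-3, 25e-3, 50e-3):.1f} J/m^2")
```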

Keywords: carbon, delamination, Kevlar, mode I, nylon, stitching

Procedia PDF Downloads 287
36344 REDUCER: An Architectural Design Pattern for Reducing Large and Noisy Data Sets

Authors: Apkar Salatian

Abstract:

To relieve the burden of reasoning on a point-to-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper, we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored to particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends from it. In this seminal article, we also show how REDUCER has been successfully applied to three different case studies.
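A minimal sketch of the two-stage pipeline, with a sliding median as one possible Filter and per-segment linear fits as one possible Compression (both stages are placeholders the pattern allows to be swapped out):

```python
import numpy as np

def reducer(data: np.ndarray, window: int = 5, max_segments: int = 10):
    """Two consecutive REDUCER stages, as a rough illustration.

    Filter: a sliding-window median suppresses outliers and noise.
    Compression: the filtered series is cut into segments and each
    segment is summarized by a linear trend (slope, intercept).
    """
    # Filter stage: sliding-window median.
    pad = window // 2
    padded = np.pad(data, pad, mode="edge")
    filtered = np.array([np.median(padded[i:i + window]) for i in range(len(data))])

    # Compression stage: per-segment least-squares lines.
    trends = []
    for seg in np.array_split(np.arange(len(filtered)), max_segments):
        slope, intercept = np.polyfit(seg, filtered[seg], 1)
        trends.append((seg[0], seg[-1], slope, intercept))
    return filtered, trends

noisy = np.sin(np.linspace(0, 6, 200)) + np.random.default_rng(0).normal(0, 0.3, 200)
_, trends = reducer(noisy)
print(trends[0])   # (start index, end index, slope, intercept) of first trend
```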

Keywords: design pattern, filtering, compression, architectural design

Procedia PDF Downloads 212
36343 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated as food vehicles since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for analyzing and quantifying these components are destructive and require lengthy sample preparation and analysis. The industry has therefore searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields large amounts of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics), such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), to extract analytical information from the corresponding spectra. PCA is well suited to NIR since it can handle many spectra at a time and can be used for unsupervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories; in LDA, the first CV is the direction of maximum ratio between inter- and intra-class variances. The present work used a portable NIR spectrometer for identification and classification of pure and fiber-fortified semolina samples. Fiber was added to semolina at two different concentrations, and after spectra acquisition, the data were used in PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, the multivariate analyses showed that NIR associated with chemometric methods can identify and classify the different samples in a fast and non-destructive way.
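A minimal sketch of this PCA-then-LDA workflow with scikit-learn, on synthetic stand-in spectra (the real wavelength grid, sample counts, and preprocessing are not given in the abstract):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in for NIR spectra: rows are samples, columns are wavelengths.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 200))
labels = np.repeat([0, 1, 2], 20)   # pure, low-fiber, high-fiber semolina

# Unsupervised exploration: project spectra onto the first principal components.
scores = PCA(n_components=5).fit_transform(spectra)

# Supervised classification on the PCA scores, with cross-validation.
acc = cross_val_score(LinearDiscriminantAnalysis(), scores, labels, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2f}")
```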

Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 212
36342 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual-energy computed tomography (DECT) records the HU(V) values of a sample at two different voltages, V = V1, V2, to obtain the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be recovered from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. To develop the inversion algorithm for low-Zeff materials, as is the case for non-calcified coronary artery plaque, we prepared aqueous samples whose calculated values of (ρe, Zeff) lie in the ranges 2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc and 6.80 ≤ Zeff ≤ 8.90. We filled the phantom with these known samples and experimentally determined HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated against (HU(100), HU(140)). The calibration is done with a known set of 20 samples, and its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe to within ±4% and Zeff to within ±1% of the actual value, at a 95% confidence level. In this inversion method, the (ρe, Zeff) of a scanned sample can be found while eliminating the effects of the CT machine and ensuring that the determinations of the two unknowns do not interfere with each other. The algorithm can thus be used to predict the chemical characteristics (ρe, Zeff) of unknown scanned materials with a 95% confidence level by inversion of the DECT data.
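The abstract does not specify the functional form of the calibration; the sketch below assumes a simple linear-in-HU model fitted by least squares on the known samples, with synthetic numbers in place of the measured phantom data:

```python
import numpy as np

def fit_calibration(hu100, hu140, target):
    """Least-squares fit of one property (rho_e or Z_eff) as a linear
    function of the two HU readings plus a constant offset."""
    A = np.column_stack([hu100, hu140, np.ones_like(hu100)])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs

def predict(coeffs, hu100, hu140):
    return coeffs[0] * hu100 + coeffs[1] * hu140 + coeffs[2]

# 20 known calibration samples; all numbers below are synthetic stand-ins
# for the measured phantom data.
rng = np.random.default_rng(1)
hu100 = rng.uniform(-50.0, 300.0, 20)
hu140 = 0.9 * hu100 + rng.normal(0.0, 5.0, 20)
rho_e = 2.65e23 + (hu100 - hu100.min()) * 3e20      # electrons per cc
coeffs = fit_calibration(hu100, hu140, rho_e)

# Inversion for an 'unknown' pixel: read its two HU values, apply the fit.
print(f"rho_e estimate: {predict(coeffs, hu100[0], hu140[0]):.3e}")
```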

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 438
36341 Academic Leadership Succession Planning Practice in Nigerian Higher Education Institutions: A Case Study of Colleges of Education

Authors: Adie, Julius Undiukeye

Abstract:

This research investigated the practice of academic leadership succession planning in Nigerian higher education institutions, drawing on the lived experiences of the academic staff of the case study institutions. It is a multi-case study adopting a qualitative research method. Ten participants (mainly academic staff) formed the study sample, and the study was guided by four research questions. Semi-structured interviews and archival information from official documents formed the sources of data. The data collected were analyzed using the Constant Comparative Technique (CCT) to generate empirical insights and facts on the subject of this paper. The following findings emerged from the data analysis: firstly, there was no formalized leadership succession plan in place in the institutions sampled for this study; secondly, despite the absence of a formal succession plan, the data indicate that academics believe succession planning is very significant for institutional survival; thirdly, existing practices of succession planning in the sampled institutions take the forms of job seniority ranking, political processes and executive fiat, ad-hoc arrangements, and external hiring; and finally, the data revealed some barriers to the practice of succession planning, such as traditional higher education institutions’ characteristics (e.g., external talent search, shared governance, diversity, and equality in leadership appointment) and a lack of interest in leadership positions. Based on the research findings, some far-reaching recommendations were made, including the urgent need for the ‘formalization’ of leadership succession planning by the higher education institutions concerned, through the design of an official policy framework.

Keywords: academic leadership, succession, planning, higher education

Procedia PDF Downloads 144
36340 Prevalence of ESBL E. coli Susceptibility to Oral Antibiotics in Outpatient Urine Culture: A Multicentric Analysis of Three Years of Data (2019-2021)

Authors: Mazoun Nasser Rashid Al Kharusi, Nada Al Siyabi

Abstract:

Objectives: The main aim of this study is to find the susceptibility rate of ESBL E. coli causing UTI to oral antibiotics. Secondary objectives are to determine the prevalence of ESBL E. coli in community urine samples, to identify the best empirical oral antibiotics with the lowest resistance rates for UTI, and to identify alternative oral antibiotics for testing and utilization. Methods: This is a retrospective descriptive study of the last three years in five major hospitals in Oman (Khowla Hospital, AN’Nahdha Hospital, Rustaq Hospital, Nizwa Hospital, and Ibri Hospital), each equipped with a microbiologist. Inclusion criteria cover all eligible outpatient urine culture isolates, excluding isolates from admitted patients with hospital-acquired urinary tract infections. Data were collected through the MOH database. The MOH hospitals use different types of testing: automated methods such as Vitek2 and manual methods. The Vitek2 machine uses a fluorogenic method for organism identification and a turbidimetric method for susceptibility testing. The manual method uses double disc diffusion for identifying ESBL and disc diffusion for antibiotic susceptibility. All laboratories follow the Clinical and Laboratory Standards Institute (CLSI) guidelines. Analysis was done with the SPSS statistical package. Results: There were 23,048 urine cultures in total. E. coli grew in 11,637 (49.6%) of them, of which 2,199 (18.8%) were confirmed as ESBL producers. As expected, the resistance rate to amoxicillin and cefuroxime is 100%. The susceptibility of these ESBL-producing E. coli to nitrofurantoin, trimethoprim-sulfamethoxazole, ciprofloxacin, and amoxicillin-clavulanate has improved over the years but remains low. ESBL E. coli predominated in females and in patients aged 66-74 years throughout the study period. Other oral antibiotic options need to be explored and tested to expand the pool of oral antibiotics for ESBL E. coli causing UTI in the community. Conclusion: There is a high rate of ESBL E. coli in urine from the community. The high resistance rates to oral antibiotics highlight the need for alternative treatment options for UTIs caused by these bacteria. Further research is needed to identify new and effective treatments for UTIs caused by ESBL E. coli.

Keywords: UTI, ESBL, oral antibiotics, E. coli, susceptibility

Procedia PDF Downloads 93
36339 Fuzzy Expert Systems Applied to Intelligent Design of Data Centers

Authors: Mario M. Figueroa de la Cruz, Claudia I. Solorzano, Raul Acosta, Ignacio Funes

Abstract:

This technological development project seeks to create a tool that allows companies that need to implement a data center to intelligently determine the factors for allocating cooling and power supply (UPS) resources at the design stage. The results should clearly show the speed, robustness, and reliability of a system designed for deployment in environments where large volumes of data must be managed and protected.

Keywords: telecommunications, data center, fuzzy logic, expert systems

Procedia PDF Downloads 345
36338 A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua

Authors: Shervin Khazaeli, Shahab Haj-zamani

Abstract:

Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and modeling of local failure (i.e., discontinua), as well as modeling of in-contact structures (i.e., continua), have been of great interest among researchers. The aim of this research is to model these two different domains simultaneously. To this end, a coupled numerical method is introduced based on the Discrete Element Method (DEM) and the eXtended Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may contain different types of discontinuities. For verification purposes, the new coupled approach is applied to benchmark problems including different contacts between and within continua and discontinua. Results are validated by comparison with existing analytical and numerical solutions. This study shows that the extended-finite-discrete element method can robustly analyze not only contact problems but also other types of discontinuities in continua, such as (i) crack formation and propagation, (ii) voids and bimaterial interfaces, and (iii) combinations of the previous cases. In essence, the proposed method can be widely used in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.
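The coupling itself is beyond a short sketch, but the DEM half rests on a simple contact-force time-stepping loop; the toy example below integrates one normal contact between two particles with a linear spring-dashpot law (all constants are illustrative, and the X-FEM side is omitted):

```python
import numpy as np

# Two circular particles approaching along x; linear spring-dashpot contact.
radius, mass = 0.01, 0.05                 # m, kg
kn, cn = 1e5, 2.0                         # normal stiffness (N/m), damping (N*s/m)
x = np.array([0.0, 0.025])                # particle centres (m)
v = np.array([0.5, -0.5])                 # velocities (m/s)
dt = 1e-6                                 # explicit time step (s)

for step in range(2000):
    overlap = 2 * radius - (x[1] - x[0])  # positive when particles touch
    # Contact force: spring on overlap plus dashpot on approach velocity.
    f = kn * overlap + cn * (v[0] - v[1]) if overlap > 0 else 0.0
    a = np.array([-f, f]) / mass          # equal and opposite contact forces
    v += a * dt                           # symplectic Euler update
    x += v * dt

print("post-contact velocities:", v)
```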

Keywords: contact problems, discrete element method, extended-finite element method, soil-structure interaction

Procedia PDF Downloads 505
36337 Domain Adaptive Dense Retrieval with Query Generation

Authors: Rui Yin, Haojie Wang, Xun Li

Abstract:

Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to the few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus; then, the generated queries are used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. We also explore contrastive learning as a method for training domain-adapted dense retrievers and show that it leads to strong performance in various retrieval settings. Experiments show that our approach is more robust than previous methods in target domains while requiring less unlabeled data.
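A minimal sketch of the mining and pseudo-labeling stages using the sentence-transformers library; the model names are placeholders, the query-generation step (e.g., with a seq2seq model) is assumed to have produced `generated` already, and the final training step is omitted:

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Placeholder checkpoints; any bi-encoder and cross-encoder could be used.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

corpus = ["passage one ...", "passage two ...", "passage three ..."]
generated = {"what does passage one say?": 0}   # query -> index of source passage

corpus_emb = bi_encoder.encode(corpus, convert_to_tensor=True)
training_triples = []
for query, pos_idx in generated.items():
    # Mine a hard negative: the closest passage that is not the source.
    hits = util.semantic_search(bi_encoder.encode(query, convert_to_tensor=True),
                                corpus_emb, top_k=2)[0]
    neg_idx = next(h["corpus_id"] for h in hits if h["corpus_id"] != pos_idx)
    # Pseudo-label both pairs with the cross-encoder.
    pos_score, neg_score = cross_encoder.predict(
        [(query, corpus[pos_idx]), (query, corpus[neg_idx])])
    # Store (query, positive, negative, score margin) for retriever training.
    training_triples.append((query, corpus[pos_idx], corpus[neg_idx],
                             pos_score - neg_score))
```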

Keywords: dense retrieval, query generation, contrastive learning, unsupervised training

Procedia PDF Downloads 104
36336 Analysis of the Influence of Frequency Variation on Nanoparticle Characterization in Bioethanol Pretreatment of Oil Palm Stem (Elaeis guineensis Jacq.) Using the Sonication Method with Alkaline Peroxide Activators for Improvement of Cellulose Content

Authors: Luristya Nur Mahfut, Nada Mawarda Rilek, Ameiga Cautsarina Putri, Mujaroh Khotimah

Abstract:

The use of bioethanol from lignocellulosic material has begun to be developed. In Indonesia, the most abundant lignocellulosic material is palm stem, which contains 32.22% cellulose; Indonesia produces approximately 300,375,000 tons of palm stem each year. To produce bioethanol from lignocellulosic material, the first step is pretreatment, but current lignocellulosic pretreatment methods remain ineffective. This is related to suboptimal particle size and pretreatment methods, which lead to insufficient lignin removal; consequently, the increase in cellulose content is not significant, resulting in a low bioethanol yield. To solve this problem, this research applied an ultrasonication pretreatment to produce pulp with nano-sized particles and thereby obtain a higher ethanol yield from palm stem. The research used a randomized block design (RAK) with one factor, ultrasonic frequency, at three levels (30 kHz, 40 kHz, and 50 kHz), with NaOH concentration held constant. The influence of frequency on cellulose content and on particle size at the nanometer scale during pretreatment was analyzed using a Particle Size Analyzer (PSA) and the Chesson method. Results and the best treatment were analyzed using ANOVA and the BNT (least significant difference) test at a 5% confidence interval. The best treatment was X3 (sonication frequency of 50 kHz), yielding 19.6% lignin, 59.49% cellulose, and 11.8% hemicellulose, with a particle size of 385.2 nm (18.8%).

Keywords: bioethanol, pretreatment, palm stem, cellulose

Procedia PDF Downloads 327
36335 Adomian’s Decomposition Method to Generalized Magneto-Thermoelasticity

Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi

Abstract:

Due to many applications and problems in the fields of plasma physics, geophysics, and many other topics, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even for equations with variable coefficients. This paper deals with a mathematical model of the generalized thermoelasticity of a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane has been assumed. Adomian’s decomposition method has been used to solve the model when the bounding plane is traction-free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, and the induced magnetic and electric fields have been presented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
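To make the decomposition idea concrete on a toy problem (not the magneto-thermoelastic system itself), the sketch below builds the Adomian series for u'(t) = u, u(0) = 1, whose partial sums converge to e^t:

```python
import sympy as sp

t = sp.symbols("t")

# Adomian decomposition for the linear test problem u'(t) = u, u(0) = 1:
# u = sum of u_n, with u_0 = u(0) and u_{n+1} = integral of u_n from 0 to t.
u_n = sp.Integer(1)          # u_0 from the initial condition
series = u_n
for n in range(1, 8):
    u_n = sp.integrate(u_n, (t, 0, t))   # u_{n+1}
    series += u_n

print(sp.expand(series))     # 1 + t + t**2/2 + ... (partial sum of exp(t))
print(sp.simplify(series - sp.exp(t).series(t, 0, 8).removeO()))   # 0
```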

Keywords: Adomian’s decomposition method, magneto-thermoelasticity, finite conductivity, iteration method, thermal load

Procedia PDF Downloads 150
36334 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is an imaging modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to their wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been conducted to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment; however, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve radiologists’ workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes automatic quantification of mammographic breast density difficult. A pre-processing step is therefore needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed on the Matlab® platform for the pre-processing of images; it uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed was placed on the boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, supporting the accuracy of the segmentation relative to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
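The original pipeline was built in Matlab®; the sketch below re-expresses the same three steps (thresholding, Hough line detection, active contour seeded on the line) with scikit-image, so details such as edge detection and coordinate handling are illustrative rather than the authors' implementation:

```python
import numpy as np
from skimage import filters
from skimage.transform import hough_line, hough_line_peaks
from skimage.segmentation import active_contour

def segment_pectoral(mammo: np.ndarray) -> np.ndarray:
    """Rough pipeline following the abstract: threshold, Hough line,
    then an active contour seeded on the detected line."""
    # 1. Thresholding removes the background / non-biological labels.
    mask = mammo > filters.threshold_otsu(mammo)
    edges = filters.sobel(mammo * mask)

    # 2. Hough transform finds the dominant straight muscle boundary,
    #    parameterized as dist = col*cos(angle) + row*sin(angle).
    h, theta, d = hough_line(edges > edges.mean())
    _, angles, dists = hough_line_peaks(h, theta, d, num_peaks=1)

    # 3. Sample points on that line as the snake's initial contour.
    rows = np.linspace(0, mammo.shape[0] - 1, 100)
    cols = (dists[0] - rows * np.sin(angles[0])) / np.cos(angles[0])
    init = np.column_stack([rows, np.clip(cols, 0, mammo.shape[1] - 1)])

    return active_contour(filters.gaussian(mammo, 3), init)

# usage: snake = segment_pectoral(mammogram_array)
```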

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
36333 Estimation and Restoration of Ill-Posed Parameters for Underwater Motion Blurred Images

Authors: M. Vimal Raj, S. Sakthivel Murugan

Abstract:

Underwater images degrade in quality due to the conditions of the imaging environment. One of the major problems in an underwater image is motion blur caused by the imaging device or the movement of the object. To rectify this after imaging, the parameters of the blurred image must be estimated; the point spread function is therefore estimated from the spectral properties of the image. To improve the estimation accuracy of the parameters, an Optimized Polynomial Lagrange Interpolation (OPLI) method is implemented after measuring the angle and length of motion in the blurred images. Initially, the data were collected from real environments in Chennai and processed. The proposed OPLI method shows better accuracy than the existing classical Cepstral, Hough, and Radon transform estimation methods for underwater images.
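Among the baselines, the cepstral estimator is the easiest to sketch: a uniform linear blur leaves periodic zeros in the spectrum, and the cepstrum converts them into a negative spike at the blur length. Below is a minimal numpy version with a rough synthetic check, assuming purely horizontal motion:

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_length_cepstral(blurred: np.ndarray) -> int:
    """Estimate the length (in pixels) of a horizontal motion blur.

    A uniform linear blur multiplies the spectrum by a sinc-like factor
    with periodic zeros; in the cepstrum (inverse FFT of the log
    magnitude spectrum) those zeros become a strong negative spike
    located at the blur length.
    """
    spectrum = np.fft.fft2(blurred)
    cepstrum = np.real(np.fft.ifft2(np.log(np.abs(spectrum) + 1e-8)))
    profile = cepstrum[0]                        # horizontal blur lives in row 0
    half = len(profile) // 2
    return int(np.argmin(profile[1:half])) + 1   # skip the zero-lag peak

# Rough synthetic check: blur a random image over 15 pixels horizontally.
rng = np.random.default_rng(2)
img = rng.random((256, 256))
blurred = fftconvolve(img, np.full((1, 15), 1 / 15), mode="same")
print("estimated blur length:", blur_length_cepstral(blurred))
```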

Keywords: image restoration, motion blur, parameter estimation, radon transform, underwater

Procedia PDF Downloads 176
36332 Exchanging Radiology Reporting System with Electronic Health Record: Designing a Conceptual Model

Authors: Azadeh Bashiri

Abstract:

Introduction: To improve the design of the electronic health record system in Iran, health information systems must be integrated on the basis of a common language so that information can be interpreted and exchanged with the system. Background: This study provides a conceptual model of a radiology reporting system using the Unified Modeling Language (UML). The proposed model can solve the problem of integrating this information system with the electronic health record system. By using this model and designing the system as service-based, it can easily connect to the electronic health record in Iran and facilitate the transfer of radiology report data. Methods: This is a cross-sectional study that was conducted in 2013. The study population comprised 22 experts working at the Imaging Center of Imam Khomeini Hospital in Tehran, and the sample matched the population. The research tool was a questionnaire prepared by the researcher to determine the information requirements. Content validity and the test-retest method were used to measure the validity and reliability of the questionnaire, respectively. Data were analyzed with an average index using SPSS; Visual Paradigm software was used to design the conceptual model. Result: Based on the requirements assessment of the experts and the related literature, administrative, demographic, and clinical data, radiological examination results, and (if an anesthesia procedure was performed) anesthesia data were suggested as the minimum data set for the radiology report, and a class diagram was designed on this basis. By identifying the radiology reporting system process, a use case diagram was also drawn. Conclusion: Given the role of radiology reports in the electronic health record system for diagnosing and managing patients' clinical problems, the proposed conceptual model for the radiology reporting system, if designed systematically, would eliminate the problem of data sharing between these systems and the electronic health record system.

Keywords: structured radiology report, information needs, minimum data set, electronic health record system in Iran

Procedia PDF Downloads 254
36331 Switching to the Latin Alphabet in Kazakhstan: A Brief Overview of Character Recognition Methods

Authors: Ainagul Yermekova, Liudmila Goncharenko, Ali Baghirzade, Sergey Sybachin

Abstract:

In this article, we address the problem of Kazakhstan's transition to the Latin alphabet. The transition process started in 2017 and is scheduled to be completed in 2025. In connection with these events, the problem of recognizing the characters of the new alphabet arises. Well-known character recognition programs such as ABBYY FineReader, FormReader, and MyScript Stylus did not recognize the specific Kazakh letters that were used in Cyrillic. The authors assess the well-known character recognition methods that could be in demand as part of the country's transition to the Latin alphabet. Three character recognition methods (template, structured, and feature-based) are considered through their operating algorithms. At the end of the article, a general conclusion is drawn about the applicability of each method to particular recognition tasks: for example, a population census, recognition of typographic text in Latin script, or recognition of photos of car numbers, store signs, etc.
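Of the three, the template method is the simplest to illustrate: each candidate glyph is scored against stored letter templates by normalized cross-correlation, as in this brute-force sketch (a real recognizer would compare scores across one template per letter of the new alphabet):

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> tuple:
    """Template method in its simplest form: slide the template over the
    image and score each position with normalized cross-correlation;
    the best-scoring position identifies the character."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = (win - win.mean()) / (win.std() + 1e-8)
            score = (w * t).mean()
            if score > best:
                best, best_pos = score, (r, c)
    return best, best_pos

# Toy demo: find a 3x3 cross template planted in a sparse binary image.
rng = np.random.default_rng(5)
img = (rng.random((40, 40)) > 0.9).astype(float)
cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], float)
img[10:13, 20:23] = cross
print(match_template(img, cross))   # best score at (10, 20)
```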

Keywords: text detection, template method, recognition algorithm, structured method, feature method

Procedia PDF Downloads 187
36330 Using Self Organizing Feature Maps for Classification in RGB Images

Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami

Abstract:

Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity and their multi-input, multi-output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning and partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with training time. In addition, the results show that the proposed algorithm is effective for color image classification.
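A minimal sketch of the supervised SOFM idea, using the MiniSom library as a stand-in implementation: train the map competitively, label each neuron by majority vote of its winning samples, then classify new vectors through their best-matching unit (the data and map size here are synthetic choices):

```python
import numpy as np
from collections import Counter
from minisom import MiniSom

# Toy data: rows are RGB feature vectors in [0, 1]^3 with two classes.
rng = np.random.default_rng(3)
X = rng.random((500, 3))
y = (X[:, 0] > 0.5).astype(int)

som = MiniSom(8, 8, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=3)
som.train_random(X, num_iteration=5000)        # competitive learning phase

# Supervised step: label each neuron by majority vote of the samples
# for which it is the best-matching unit (BMU).
votes = {}
for xi, yi in zip(X, y):
    votes.setdefault(som.winner(xi), Counter()).update([yi])
neuron_label = {unit: c.most_common(1)[0][0] for unit, c in votes.items()}

# Classify a new sample through its BMU's label.
sample = np.array([0.9, 0.2, 0.1])
print("predicted class:", neuron_label.get(som.winner(sample)))
```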

Keywords: classification, SOFM algorithm, neural network, neighborhood, RGB image

Procedia PDF Downloads 478
36329 Genetic Testing and Research in South Africa: The Sharing of Data Across Borders

Authors: Amy Gooden, Meshandren Naidoo

Abstract:

Genetic research is not confined to a particular jurisdiction. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear.

Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa

Procedia PDF Downloads 162
36328 Adaptive Discharge Time Control for Battery Operation Time Enhancement

Authors: Jong-Bae Lee, Seongsoo Lee

Abstract:

This paper proposes an adaptive discharge time control method to balance cell voltages in the alternating battery cell discharging method, in which battery cells are periodically discharged in turn. The recovery effect increases battery output voltage while a given battery cell rests without discharging, so the battery operation time of the target system increases. However, voltage mismatch between cells leads to two problems. First, the voltage difference between cells induces inter-cell current, which wastes power. Second, it degrades battery operation time, since the system stops when any cell reaches the minimum system operation voltage. To solve this problem, the proposed method adaptively controls cell discharge time to equalize the cell voltages. With the proposed method, battery operation time increases by about 19%, whereas the plain alternating battery cell discharging method shows about a 7% improvement.
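The abstract does not give the control law, so the toy simulation below only illustrates the principle: stretch the discharge slot of whichever cell currently has the higher voltage so that the two cells reach the cutoff voltage together (the voltage model and all constants are invented for illustration):

```python
def simulate(adaptive: bool, slots: int = 1000, base_slot: float = 1.0) -> float:
    """Total operating time for two mismatched cells discharged in turn."""
    charge = [100.0, 90.0]                 # cell charges, deliberately unequal
    v = lambda q: 3.0 + 1.2 * q / 100.0    # toy open-circuit voltage model
    t = 0.0
    for i in range(slots):
        cell = i % 2                       # alternate cells each slot
        dt = base_slot
        if adaptive:
            # Discharge the higher-voltage cell longer to equalize.
            dv = v(charge[cell]) - v(charge[1 - cell])
            dt = max(0.1, base_slot + 5.0 * dv)
        drain = 0.4 * dt
        if charge[cell] < drain or v(charge[cell]) <= 3.2:
            break                          # minimum operating voltage reached
        charge[cell] -= drain
        charge[1 - cell] += 0.01 * dt      # small recovery while resting
        t += dt
    return t

print("fixed slots   :", simulate(False))
print("adaptive slots:", simulate(True))   # runs longer: cells hit cutoff together
```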

Keywords: battery, recovery effect, low-power, alternating battery cell discharging, adaptive discharge time control

Procedia PDF Downloads 352
36327 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data yield insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints that specifically measure a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest has no gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases, and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods and allow for the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 278
36326 Integrated Grey Relational Analysis-Standard Deviation Method for Handover in Heterogeneous Networks

Authors: Mohanad Alhabo, Naveed Nawaz, Mahmoud Al-Faris

Abstract:

The dense deployment of small cells is a promising solution to enhance the coverage and capacity of heterogeneous networks (HetNets). However, unplanned deployment can bring new challenges to the network, ranging from interference to unnecessary handovers and handover failures, causing a degradation in the quality of service (QoS) delivered to the end user. In this paper, we propose an integrated Grey Relational Analysis-Standard Deviation based handover method (GRA-SD) for HetNets. The proposed method integrates the Standard Deviation (SD) technique to acquire the weights of the handover metrics and the GRA method to select the best handover base station. The performance of the GRA-SD method is evaluated and compared with traditional Multiple Attribute Decision Making (MADM) methods, including the Simple Additive Weighting (SAW) and VIKOR methods. Results reveal that the proposed method outperforms the other methods in minimizing the number of frequent unnecessary handovers and handover failures, in addition to improving energy efficiency.
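A compact numpy sketch of the two ingredients, SD-based weighting and grey relational grading, on an invented three-cell, three-metric example (the metric set, normalization choices, and distinguishing coefficient are assumptions, not the paper's exact formulation):

```python
import numpy as np

def gra_sd_rank(metrics: np.ndarray, benefit: np.ndarray, zeta: float = 0.5):
    """Rank candidate base stations for handover.

    Rows are candidate cells, columns are handover metrics. Benefit
    metrics (e.g., signal strength) are normalized 'larger is better';
    cost metrics (e.g., load) 'smaller is better'. Weights come from
    the standard deviation of each normalized column, and grey
    relational coefficients measure closeness to the ideal series.
    """
    x = metrics.astype(float)
    span = np.ptp(x, axis=0) + 1e-12
    norm = np.where(benefit, (x - x.min(0)) / span, (x.max(0) - x) / span)
    w = norm.std(0) / (norm.std(0).sum() + 1e-12)   # SD-based weights
    delta = np.abs(1.0 - norm)                       # distance to ideal (=1)
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    grades = (grc * w).sum(1)                        # weighted grey grades
    return np.argsort(grades)[::-1]                  # best candidate first

cells = np.array([[-80, 0.3, 20],     # RSRP (dBm), load, speed penalty
                  [-95, 0.1,  5],
                  [-85, 0.6, 10]])
print(gra_sd_rank(cells, benefit=np.array([True, False, False])))
```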

Keywords: energy efficiency, handover, HetNets, MADM, small cells

Procedia PDF Downloads 116
36325 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors

Authors: Katawut Kaewbanjong

Abstract:

We analyzed a large volume of data and identified software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors among a group of 71 pre-defined factors from 191 software projects in ISBSG Release 12. Eight prediction models were used to test the predictive potential of these factors: neural network, k-NN, Naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression, and logistic regression. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
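A minimal scikit-learn sketch of this kind of model benchmarking over a factor matrix; the data here are synthetic stand-ins, since the ISBSG dataset is licensed and the paper's exact preprocessing is not given:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for 15 significant project factors and a binary
# satisfied/unsatisfied label across 191 projects.
rng = np.random.default_rng(4)
X = rng.normal(size=(191, 15))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 191) > 0).astype(int)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name:20s} accuracy: {acc.mean():.3f}")
```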

Keywords: prediction model, statistical analysis, software project, user satisfaction factor

Procedia PDF Downloads 124
36324 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example

Authors: Svetlana Kostrubina, Anastasia Prokopeva

Abstract:

This research presents a comprehensive study of the word formation processes in computer terminology in English and Russian and provides learners with a system of exercises for training these skills. Its originality lies in the comparative approach, which shows both general patterns and specific features of English and Russian computer term formation. The key contribution is the development of a system of exercises for training computer terminology based on Bloom’s taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in English and Russian computer term formation and to develop exercises. To achieve this goal, the authors employed Bloom’s taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students’ cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematizing linguistic concepts and clarifying the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining the structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of abbreviation-formation methods in English and Russian computer vocabulary; a technique of tabular data processing for visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and distinct features of computer-term abbreviations in Russian and English. The research shows that affixation retains its productivity in English and Russian computer term formation. Bloom’s taxonomy allows us to plan a training program and predict its effectiveness based on an assessment of the teaching methods used.

Keywords: word formation, affixation, computer terms, Bloom's taxonomy

Procedia PDF Downloads 14
36323 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost; the lost frames are then estimated using the temporal information of corresponding pixels in neighboring frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first, the standard autoregressive approach, estimates the lost frame unidirectionally: only information from previous frames is used. In the second method, information from both the previous and the following frames is used, so this method is known as bidirectional estimation. A series of tests then assesses the performance of each method in different modes, and the results are compared.
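A per-pixel numpy sketch of both variants: fit an AR(p) model to each pixel's temporal series and predict the missing frame forward, and optionally also backward from the following frames, averaging the two (the model order and the simple least-squares fit are illustrative choices):

```python
import numpy as np

def fit_ar(series: np.ndarray, order: int = 2) -> np.ndarray:
    """Least-squares AR(p) coefficients for one pixel's time series."""
    A = np.array([series[t - order:t][::-1] for t in range(order, len(series))])
    b = series[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs                             # [c_lag1, c_lag2, ...]

def estimate_lost_frame(prev_frames, next_frames=None, order=2):
    """Estimate one lost frame pixel by pixel with AR prediction.

    Unidirectional: predict forward from the previous frames only.
    Bidirectional: additionally predict backward from the following
    frames (time-reversed) and average the two predictions.
    """
    h, w = prev_frames[0].shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            past = np.array([f[i, j] for f in prev_frames], float)
            fwd = fit_ar(past, order) @ past[:-order - 1:-1]
            if next_frames is None:
                out[i, j] = fwd                        # unidirectional
            else:
                future = np.array([f[i, j] for f in next_frames], float)[::-1]
                bwd = fit_ar(future, order) @ future[:-order - 1:-1]
                out[i, j] = 0.5 * (fwd + bwd)          # bidirectional
    return out

# Toy check: pixels ramp 1, 2, ..., 5 over time; the next frame should be ~6.
frames = [np.full((4, 4), v, float) for v in (1.0, 2.0, 3.0, 4.0, 5.0)]
print(estimate_lost_frame(frames)[0, 0])
```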

Keywords: error concealment, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 540
36322 New Fourth Order Explicit Group Method in the Solution of the Helmholtz Equation

Authors: Norhashidah Hj Mohd Ali, Teng Wai Ping

Abstract:

In this paper, the formulation of a new explicit group method with fourth-order accuracy is described for solving the two-dimensional Helmholtz equation. The formulation is based on the nine-point fourth-order compact finite difference approximation formula. A complexity analysis of the developed scheme is also presented. Several numerical experiments were conducted to test the feasibility of the developed scheme. Comparisons with other existing schemes are reported and discussed. Preliminary results indicate that this method is a viable high-accuracy alternative solver for the Helmholtz equation.

Keywords: explicit group method, finite difference, Helmholtz equation, five-point formula, nine-point formula

Procedia PDF Downloads 500
36321 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors

Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal

Abstract:

Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for detecting wave rides and computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach computes the velocity from the Global Positioning System (GPS) signal and finds the velocity thresholds that identify the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated with similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times, and their durations. This paper shows that it is feasible to use smartphones to quantify performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone's GPS and IMU.
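A minimal sketch of the GPS-only variant: threshold the speed trace with hysteresis to segment wave rides (the threshold values are illustrative, and the IMU fusion step of the second approach is omitted):

```python
import numpy as np

def detect_wave_rides(speed: np.ndarray, t: np.ndarray,
                      v_start: float = 2.5, v_end: float = 1.0,
                      min_duration: float = 3.0):
    """Detect wave rides from a GPS speed trace (m/s) with hysteresis:
    a ride starts when speed rises above v_start and ends when it drops
    below v_end; rides shorter than min_duration seconds are discarded.
    """
    rides, start = [], None
    for i, v in enumerate(speed):
        if start is None and v > v_start:
            start = i
        elif start is not None and v < v_end:
            if t[i] - t[start] >= min_duration:
                rides.append((t[start], t[i] - t[start]))  # (start, duration)
            start = None
    return rides

# Synthetic trace with two fast segments standing in for wave rides.
t = np.arange(0, 60, 0.2)
speed = 0.5 + 4.0 * ((t > 10) & (t < 18)) + 3.5 * ((t > 35) & (t < 41))
print(detect_wave_rides(speed, t))   # two rides: near t=10 and t=35
```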

Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance

Procedia PDF Downloads 401
36320 Computational Fluid Dynamics Simulation Study of Flow near Moving Wall of Various Surface Types Using Moving Mesh Method

Authors: Khizir Mohd Ismail, Yu Jun Lim, Tshun Howe Yong

Abstract:

The study of flow behavior in an enclosed volume using Computational Fluid Dynamics (CFD) has been around for decades. However, owing to limited knowledge of adaptive grid methods, flow in an enclosed volume near a moving wall has been less explored with CFD. A CFD simulation of flow in an enclosed volume near a moving wall was demonstrated and studied by introducing a moving mesh method and was modeled with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) approach. A static enclosed volume with a controlled opening size at the bottom was positioned against a moving, translating wall with sliding mesh features. Controlled variables such as smooth, creviced, and corrugated wall characteristics, the distance between the enclosed volume and the wall, and the speed of the moving wall relative to the enclosed chamber were varied to understand how the flow behaves and reacts between these two geometries. The simulations were validated against experimental results, and confidence in the results was established by the good agreement between the simulation and the experimental data. This study provides better insight into flow behavior in an enclosed volume when various moving wall types are introduced at various distances, and it creates potential applications involving adaptive grid methods in CFD.

Keywords: moving wall, adaptive grid methods, CFD, moving mesh method

Procedia PDF Downloads 147
36319 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover

Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae

Abstract:

Supermodularity, or complementarity, is a popular concept in economics that can characterize many objective functions, such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among the inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. We therefore propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare geographic and distant knowledge spillovers in terms of their effects on social welfare, using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if the geographic knowledge spillover engenders greater social welfare than the distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found industry-wise and patent-subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare the results from different time periods to see whether the development of Internet and communication technology has changed the pattern of dominance. In addition, to deal appropriately with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.

Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling

Procedia PDF Downloads 129