Search results for: dispersed region growing algorithm (DRGA)
10409 ACO-TS: an ACO-based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals have moved to cloud computing, which many consider a significant shift in the field of computing. Cloud computing platforms are distributed, parallel systems consisting of collections of interconnected physical and virtual machines. With the increasing demand for, and benefit of, cloud computing infrastructure, diverse computing processes can be executed in cloud environments. Many organizations and individuals around the world depend on cloud computing infrastructure to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to the changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random; First Come First Serve, FCFS; and Fastest Processor to the Largest Task First, FPLTF). ACO is a stochastic optimization search method used here to assign incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework.
Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
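The abstract describes assigning tasks to VMs so as to minimize makespan while balancing load. A minimal, generic ACO-style sketch of that idea in Python; the parameter names, the speed-based heuristic, and the pheromone update below are common ACO conventions, not details taken from the ACO-TS paper:

```python
import random

def aco_schedule(task_lengths, vm_speeds, n_ants=20, n_iters=50,
                 alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Assign tasks to VMs, minimizing makespan, with a basic ACO loop.

    Illustrative sketch only: the update rules are generic ACO
    conventions, not taken from the ACO-TS paper.
    """
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    # pheromone[t][v]: learned desirability of running task t on VM v
    pheromone = [[1.0] * n_vms for _ in range(n_tasks)]
    best_assign, best_makespan = None, float("inf")

    def makespan(assign):
        load = [0.0] * n_vms
        for t, v in enumerate(assign):
            load[v] += task_lengths[t] / vm_speeds[v]
        return max(load)

    for _ in range(n_iters):
        for _ in range(n_ants):
            assign = []
            for t in range(n_tasks):
                # probabilistic choice: pheromone ^ alpha * heuristic ^ beta,
                # where faster VMs are heuristically more attractive
                weights = [pheromone[t][v] ** alpha * vm_speeds[v] ** beta
                           for v in range(n_vms)]
                assign.append(rng.choices(range(n_vms), weights)[0])
            m = makespan(assign)
            if m < best_makespan:
                best_assign, best_makespan = assign, m
        # evaporate, then reinforce the best-so-far assignment
        for t in range(n_tasks):
            for v in range(n_vms):
                pheromone[t][v] *= (1.0 - rho)
        for t, v in enumerate(best_assign):
            pheromone[t][v] += 1.0 / best_makespan
    return best_assign, best_makespan
```

For example, five tasks of lengths 40, 10, 30, 20, 25 on two VMs of speeds 1.0 and 2.0 have an optimal makespan of 42.5, which a run of this loop readily approaches.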
Procedia PDF Downloads 420
10408 A Case Study for User Rating Prediction on Automobile Recommendation System Using MapReduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles. What is more, computational speed is a major weakness of collaborative filtering technology. Therefore, using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, we present a prediction of users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and provide recommendations for automobile providers to help them predict users' comments on automobiles with new properties. Firstly, we address the sparseness of the matrix through prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since the differing dimensions of automobile properties introduce great error into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, improving computational speed several times over. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF algorithms; it avoids calculating the interpolation weights of neighbors, which is more convenient in industry.
Keywords: collaborative filtering, recommendation, data normalization, MapReduce
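The UV decomposition step the abstract refers to can be sketched on a single machine as stochastic gradient descent over the observed entries of the rating matrix. This is a hedged illustration: the distributed MapReduce layer and the paper's normalization step are omitted, and all names and hyperparameters are ours:

```python
import random

def uv_decompose(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
                 epochs=500, seed=0):
    """Factor a sparse rating matrix R ~ U @ V^T by SGD on observed entries.

    ratings: list of (user, item, rating) triples (only observed cells).
    Single-machine sketch of UV decomposition; in practice each gradient
    pass over the triples would be distributed as MapReduce jobs.
    """
    rng = random.Random(seed)
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for (u, i, r) in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                # gradient step on squared error with L2 regularization
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

def predict(U, V, u, i):
    """Predicted rating of item i by user u."""
    return sum(uf * vf for uf, vf in zip(U[u], V[i]))
```

After training on a handful of observed ratings, the reconstruction error over those entries drops well below the error of the zero-initialized factors.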
Procedia PDF Downloads 217
10407 Architectural Knowledge Systems Related to Use of Terracotta in Bengal
Authors: Nandini Mukhopadhyay
Abstract:
The prominence of terracotta as a building material in Bengal is well justified by its geographical location. The architectural knowledge system associated with terracotta can be comprehended through the typology of the built structures, as they act as texts through which to interpret that knowledge. The history of Bengal has witnessed the influence of several rulers on the architectural vocabulary of the region. This metamorphosis of the region's architectural knowledge systems spans the Bhakti movement, the Islamic influence, and British rule, which led the use of terracotta to evolve from decorative elements to structural elements in the present-day context. This paper intends to develop an understanding of terracotta as a building material, its use in built structures, the common problems associated with terracotta construction, and the techniques of maintenance, repair, and conservation. The paper also explores the size, shape, and geometry of the material and its varied use in the temples and mosques of the region. It further notes that the use of terracotta was concentrated mainly in religious structures rather than in the settlements of the common people, even though the architectural style of Bengal's temples and mosques is hugely influenced by the houses of the common people.
Keywords: terracotta, material, knowledge system, conservation
Procedia PDF Downloads 149
10406 Pattern and Trend of Open Burning Occurrence in Greater Mekong Sub-Region Countries: Case Study Thailand, Laos, and Myanmar
Authors: Nion Sirimongkonlertkun, Vivard Phonekeo
Abstract:
This research focused on open burning occurrences in Greater Mekong Sub-Region countries that influence the increase of PM10 concentrations. Thailand, Myanmar, and Laos were chosen as the case-study countries, and 2009, 2010, and 2012 as the case-study years. Hotspots detected by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on board the Terra/Aqua satellites, provided by the Rapid Response System, were used to represent open burning locations in the region. Hotspots were selected at fire confidence levels of 80-100%. Spatial analysis in GIS was used as the main tool for analyzing and locating open burning at the study sites, with hotspot pixels of 1 km by 1 km. The total hotspot count over the study period of four years (2007, 2009, 2010, and January-April 2012) at the regional level, covering Thailand, Laos, and Myanmar, was 255,177 hotspots, a very high yearly average of 63,795 hotspots. The highest share was seen in Myanmar (50%), followed by Laos (36%) and Thailand (14%). For Thailand, the majority of burning, 64%, occurred in the northern region, with a density of 5 hotspots per 100 km2. According to the statistics of the four years, hotspot counts increased 10-fold from January to February and 4-fold from February to March, and then declined by half from March to April. Therefore, to develop a policy aimed at reducing open burning, the government should focus seriously on this problem during the peak period, February to March of every year, when hotspots and open burning areas increase significantly.
Keywords: PM10, hotspot, Greater Mekong Sub-Region, open burning
Procedia PDF Downloads 360
10405 Spatial Data Mining by Decision Trees
Authors: Sihem Oujdi, Hafida Belbachir
Abstract:
Existing data mining methods cannot be applied directly to spatial data, because spatial data require the consideration of spatial specificities, such as spatial relationships. This paper focuses on classification with decision trees, one of the main data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships among the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data in the accidentology domain. A comparative study of our approach with other work on classification by spatial decision trees is detailed.
Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining
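The join-materialization approach can be sketched as a preprocessing step: flatten the target table, neighbor table, and spatial join index into one feature table, which an ordinary C4.5 learner can then consume. The table layouts below are assumed for illustration, not taken from the paper:

```python
from collections import defaultdict

def materialize_join(targets, neighbors, join_index):
    """Join-materialization step: flatten target + neighbor tables into one
    feature table using a precomputed spatial join index.

    targets:    {id: {attr: value}}   e.g. road segments with attributes
    neighbors:  {id: {attr: value}}   e.g. nearby objects, each with a "type"
    join_index: [(target_id, neighbor_id, relation)] spatial relationships
    Returns one feature dict per target, ready for a standard decision-tree
    learner such as C4.5 (illustrative schema; real attributes will differ).
    """
    rows = []
    for tid, attrs in targets.items():
        row = dict(attrs)
        counts = defaultdict(int)
        for t, n, rel in join_index:
            if t == tid:
                # aggregate neighbors by spatial relation and neighbor type
                counts[(rel, neighbors[n]["type"])] += 1
        for (rel, ntype), c in counts.items():
            row[f"{rel}_{ntype}_count"] = c
        rows.append(row)
    return rows
```

Each resulting row carries the target's own attributes plus aggregated neighborhood features, which is the memory-for-speed trade-off the abstract attributes to join materialization.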
Procedia PDF Downloads 612
10404 Synthesis of TiO₂/Graphene Nanocomposites with Excellent Visible-Light Photocatalytic Activity Based on Chemical Exfoliation Method
Authors: Nhan N. T. Ton, Anh T. N. Dao, Kouichirou Katou, Toshiaki Taniike
Abstract:
Facile electron-hole recombination and a broad band gap are two major drawbacks of titanium dioxide (TiO₂) in visible-light photocatalysis. Hybridization of TiO₂ with graphene is a promising strategy to mitigate these drawbacks. Recently, there have been many reports on the synthesis of TiO₂/graphene nanocomposites, in most of which graphene oxide (GO) was used as the starting material. However, the reduction of GO introduces a large number of defects into the graphene framework. In addition, the sensitivity of titanium alkoxides to water (which GO usually contains) significantly obstructs the uniform and controlled growth of TiO₂ on graphene. Here, we demonstrate a novel technique for synthesizing TiO₂/graphene nanocomposites without the use of GO. A graphene dispersion was obtained through the chemical exfoliation of graphite in titanium tetra-n-butoxide with the aid of ultrasonication. The dispersion was used directly for the sol-gel reaction in the presence of different catalysts. A TiO₂/reduced graphene oxide (TiO₂/rGO) nanocomposite prepared by a solvothermal method from GO, as well as commercial TiO₂-P25, were used as references. It was found that the titanium alkoxide afforded a graphene dispersion of high quality, with a trace amount of defects and few-layer dispersed graphene.
Moreover, the sol-gel reaction from this dispersion led to TiO₂/graphene nanocomposites with characteristics promising for visible-light photocatalysis, including: (I) the formation of a TiO₂ nanolayer (thickness ranging from 1 nm to 5 nm) that uniformly and thinly covered the graphene sheets, (II) a trace amount of defects on the graphene framework (low ID/IG ratio: 0.21), (III) a significant extension of the absorption edge into the visible-light region (a remarkable extension of the absorption edge to 578 nm beside the usual edge at 360 nm), and (IV) a dramatic suppression of electron-hole recombination (the lowest photoluminescence intensity compared to the reference samples). These advantages were successfully demonstrated in the photocatalytic decomposition of methylene blue under visible-light irradiation. The TiO₂/graphene nanocomposites exhibited 15 and 5 times higher activity than TiO₂-P25 and the TiO₂/rGO nanocomposite, respectively.
Keywords: chemical exfoliation, photocatalyst, TiO₂/graphene, sol-gel reaction
Procedia PDF Downloads 160
10403 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover routes. In this paper, we utilize the positions of network nodes to group the nodes into connected clusters, and we use cluster heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 425
10402 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are often incomplete or inaccurate due to censoring. Such data may have adverse effects when used in estimation problems. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller RMSE than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm in all cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
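As a simplified illustration of the Newton-Raphson machinery the abstract compares, here is an NR iteration for the scale parameter of a two-parameter Rayleigh distribution with known location, on a complete (uncensored) sample; the additional terms of the progressive type-II censored likelihood are omitted, and the parametrization below is one common choice:

```python
def rayleigh_scale_mle_nr(data, mu=0.0, theta0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson MLE of theta = sigma^2 for a two-parameter Rayleigh
    sample with known location mu (complete sample, no censoring).

    Assumed density: f(x) = ((x - mu) / theta) * exp(-(x - mu)^2 / (2 theta)),
    x > mu. The log-likelihood terms in theta are
        l(theta) = const - n*log(theta) - S / (2 theta),  S = sum (x - mu)^2,
    so the closed form is theta_hat = S / (2n); NR is shown for illustration
    of the iterative machinery used in the abstract.
    """
    n = len(data)
    S = sum((x - mu) ** 2 for x in data)   # sufficient statistic
    theta = theta0
    for _ in range(max_iter):
        score = -n / theta + S / (2 * theta ** 2)   # dl/dtheta
        hess = n / theta ** 2 - S / theta ** 3      # d2l/dtheta2
        step = score / hess
        theta -= step                               # NR update
        if abs(step) < tol:
            break
    return theta
```

For the sample [1.0, 2.0, 1.5, 0.5, 2.5] with mu = 0, S = 13.75 and the iteration converges to the closed form S / (2n) = 1.375.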
Procedia PDF Downloads 163
10401 PID Sliding Mode Control with Sliding Surface Dynamics Based Continuous Control Action for Robotic Systems
Authors: Wael M. Elawady, Mohamed F. Asar, Amany M. Sarhan
Abstract:
This paper adopts a continuous sliding mode control scheme for the trajectory tracking control of robot manipulators with structured and unstructured uncertain dynamics and external disturbances. In this algorithm, the equivalent control of conventional sliding mode control is replaced by a PID control action. Moreover, the discontinuous switching control signal is replaced by a continuous proportional-integral (PI) control term, so that implementing the proposed control algorithm does not require prior knowledge of the bounds of the unknown uncertainties and external disturbances, and the chattering phenomenon of the conventional sliding mode control approach is completely eliminated. The closed-loop system with the adopted control algorithm is proved to be globally stable using Lyapunov stability theory. Numerical simulations using the dynamical model of robot manipulators with modeling uncertainties demonstrate the superiority and effectiveness of the proposed approach in high-speed trajectory tracking problems.
Keywords: PID, robot, sliding mode control, uncertainties
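A minimal sketch of the idea on a 1-DOF double integrator rather than a full manipulator model: a PID sliding surface, with the discontinuous switching term replaced by a continuous PI action on the sliding variable. All gains and the disturbance are illustrative, not from the paper:

```python
def simulate_pid_smc(T=10.0, dt=0.001):
    """Continuous sliding-mode control sketch on a 1-DOF double integrator
    x'' = u + d(t). The switching term sign(s) is replaced by a continuous
    PI action on the sliding variable s; gains are illustrative only.
    Returns the final position when tracking a unit setpoint.
    """
    Kp, Ki, Kd = 4.0, 1.0, 2.0       # PID sliding-surface gains
    k1, k2 = 20.0, 50.0              # continuous PI reaching-law gains
    x, v = 0.0, 0.0                  # state: position, velocity
    x_ref = 1.0                      # constant setpoint
    int_e, int_s = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x_ref - x
        de = -v                               # d(e)/dt, constant reference
        int_e += e * dt
        s = Kd * de + Kp * e + Ki * int_e     # PID sliding surface
        int_s += s * dt
        u = k1 * s + k2 * int_s               # continuous PI reaching law
        d = 0.5                               # constant matched disturbance
        v += (u + d) * dt                     # semi-implicit Euler step
        x += v * dt
    return x
```

Despite the unknown constant disturbance, the integral terms drive the tracking error toward zero without any switching, which is the chattering-free property the abstract emphasizes.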
Procedia PDF Downloads 508
10400 FlexPoints: Efficient Algorithm for Detection of Electrocardiogram Characteristic Points
Authors: Daniel Bulanda, Janusz A. Starzyk, Adrian Horzyk
Abstract:
The electrocardiogram (ECG) is one of the most commonly used medical tests, essential for the correct diagnosis and treatment of the patient. While ECG devices generate a huge amount of data, only a small part of it carries valuable medical information. To deal with this problem, many compression algorithms and filters have been developed over the years. However, the rapid development of new machine learning techniques poses new challenges. To address this class of problems, we created the FlexPoints algorithm, which searches for characteristic points in the ECG signal and ignores all other points that do not carry relevant medical information. The conducted experiments proved that the presented algorithm can significantly reduce the number of data points representing the ECG signal without losing valuable medical information. These sparse but essential characteristic points (flex points) can be a perfect input for modern machine learning models, which work much better with flex points as input than with raw data or data compressed by many popular algorithms.
Keywords: characteristic points, electrocardiogram, ECG, machine learning, signal compression
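The abstract does not specify the FlexPoints selection criterion, so the following is a plausible stand-in rather than the paper's method: keep samples where the discrete second difference (a curvature proxy, large where the signal bends) exceeds a threshold, plus the endpoints:

```python
def flex_points(signal, threshold=0.1):
    """Select 'characteristic' samples of a 1-D signal: indices where the
    discrete second difference (a curvature proxy) exceeds a threshold.

    Illustrative stand-in for the FlexPoints criterion, which the abstract
    does not specify; the rule and threshold here are assumptions.
    """
    keep = [0]                                   # always keep the endpoints
    for i in range(1, len(signal) - 1):
        curvature = signal[i - 1] - 2 * signal[i] + signal[i + 1]
        if abs(curvature) > threshold:
            keep.append(i)
    keep.append(len(signal) - 1)
    return keep
```

On a triangular pulse such as [0, 1, 2, 3, 2, 1, 0], only the apex bends, so the kept indices are the two endpoints and the peak, illustrating how straight runs of samples are discarded.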
Procedia PDF Downloads 162
10399 Development of Electronic Waste Management Framework at College of Engineering, Design, Art and Technology
Authors: Wafula Simon Peter, Kimuli Nabayego Ibtihal, Nabaggala Kimuli Nashua
Abstract:
The worldwide use of information and communications technology (ICT) equipment and other electronic equipment is growing, and consequently there is a growing amount of equipment that becomes waste after its time in use. This growth is expected to accelerate, since equipment lifetimes decrease while consumption grows. As a result, e-waste is one of the fastest-growing waste streams globally. In its second Global E-waste Monitor, the United Nations University (UNU) calculates that 44.7 million metric tonnes (Mt) of e-waste were generated globally in 2016. The study population was 80 respondents, from which a sample of 69 respondents was selected using simple and purposive sampling techniques. This research was carried out to investigate the problem of e-waste and develop a framework to improve e-waste management. The objective of the study was to develop a framework for improving e-waste management at the College of Engineering, Design, Art and Technology (CEDAT). This objective was broken down into specific ones: establishing the policy and other regulatory frameworks used in e-waste management at CEDAT, determining the effectiveness of e-waste management practices at CEDAT, establishing the critical challenges constraining e-waste management at the College, and developing a framework for e-waste management. The study reviewed the e-waste regulatory framework used at the college and then collected data, which was used to devise the framework. The study also established that a weak policy and regulatory framework, a lack of proper infrastructure, improper disposal of e-waste, and a general lack of awareness of e-waste and the magnitude of the problem are the critical challenges of e-waste management. In conclusion, the policy and regulatory framework should be revised, localized, and strengthened to address the problem in context.
Awareness campaigns, the development of proper infrastructure, and extensive research to establish the volumes and magnitude of the problem would also help. The study recommends a framework for the improvement of e-waste management.
Keywords: e-waste, treatment, disposal, computers, model, management policy and guidelines
Procedia PDF Downloads 79
10398 Awning: An Unsung Trait in Rice (Oryza sativa L.)
Authors: Chamin Chimyang
Abstract:
Fast-changing global trends and declining forest cover have impacted agricultural lands; animals, especially birds, may become major pests in the near future, yet they go neglected or unreported in much of the literature, mainly because bird infestation is a pocket-zone problem. This bird infestation can be attributed to the thinning of forest cover and the decline of the birds' foraging hotspots due to anthropogenic activity. There are many ways, both conventional and non-conventional, to keep birds away from agricultural fields. But the question is whether traditional bird-scaring methods, such as scarecrows, are effective enough. Several traits in rice are thought to deter birds from foraging in paddy fields, and selection for such traits might be rewarding; these include the angle of the flag leaf from the stem, grain size, the novelty of a trait in a particular region, and the awn. The awn is a trait on which negative selection was imposed to such an extent that there has been a decline in the nucleotides responsible for it. In this session, the histology and genetics of the trait, the genes behind it, and how awns might be one solution to the problem stated above will be discussed in detail.
Keywords: bird infestation, awning, negative selection, domestication
Procedia PDF Downloads 25
10397 Pavement Maintenance and Rehabilitation Scheduling Using Genetic Algorithm Based Multi Objective Optimization Technique
Authors: Ashwini Gowda K. S, Archana M. R, Anjaneyappa V
Abstract:
This paper presents a pavement maintenance and management system (PMMS) to obtain optimum pavement maintenance and rehabilitation strategies and maintenance scheduling for a network using a multi-objective genetic algorithm (MOGA). The optimal pavement maintenance and rehabilitation strategy maximizes the pavement condition index (PCI) of the road sections in a network while minimizing maintenance and rehabilitation cost over the planning period. In this paper, NSGA-II is applied to perform the maintenance optimization; this maintenance approach is expected to preserve and improve the existing condition of the highway network in a cost-effective way. The proposed PMMS is applied to a network whose pavement is assessed using the PCI. The maintenance costs for a planning period of 20 years obtained from the non-dominated solutions ranged from 4.81×10¹⁰ ₹ to 5.19×10¹⁰ ₹.
Keywords: genetic algorithm, maintenance and rehabilitation, optimization technique, pavement condition index
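The core of NSGA-II selection over the two objectives here (maximize PCI, minimize cost) is non-dominated sorting. A minimal sketch of Pareto dominance and first-front extraction; the full algorithm adds the ranking of later fronts, crowding distance, and the genetic operators:

```python
def dominates(a, b):
    """Pareto dominance for plans a, b = (pci, cost):
    maximize pci, minimize cost; a must be no worse in both
    objectives and strictly better in at least one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def first_front(plans):
    """Return the non-dominated plans (the first Pareto front), the set
    from which NSGA-II reports its trade-off solutions. Sketch only."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]
```

For example, among candidate plans (PCI, cost in arbitrary units) (80, 5.0), (70, 4.0), (60, 4.5), (80, 6.0), the first two form the front: (60, 4.5) is beaten by (70, 4.0) on both objectives, and (80, 6.0) costs more than (80, 5.0) for the same PCI.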
Procedia PDF Downloads 149
10396 Predicting Depth of Penetration in Abrasive Waterjet Cutting of Polycrystalline Ceramics
Authors: S. Srinivas, N. Ramesh Babu
Abstract:
This paper presents a model to predict the depth of penetration in polycrystalline ceramic materials cut by an abrasive waterjet. The proposed model considers the interaction of a cylindrical jet with the target material in the upper region and neglects the role of threshold velocity in the lower region. The predictions of the proposed model are validated with experimental results obtained on silicon carbide (SiC) blocks.
Keywords: abrasive waterjet cutting, analytical modeling, ceramics, micro-cutting and inter-granular cracking
Procedia PDF Downloads 305
10395 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City buildings collapse after an earthquake, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, given the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization based on RSSI, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish the signal propagation model with minimum test error in each scene. The target location can then be calculated, via an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, a mixed filtering algorithm (the average of average, median, and Gaussian filtering) performs better than any single filtering algorithm, and using the signal propagation model, the minimum error of the distance between known nodes and the target node across the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
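The pipeline the abstract describes (filter repeated RSSI readings, invert a fitted propagation model to get distances, then apply an improved centroid) can be sketched as follows. The log-distance model, its parameters, and the exact form of the Gaussian filtering step are assumptions for illustration, not values from the paper:

```python
import math
import statistics

def mixed_filter(rssi_samples):
    """Mixed filtering per the abstract: average of the mean, the median,
    and a Gaussian-style estimate of repeated RSSI readings. The Gaussian
    step here keeps samples within one standard deviation of the mean and
    averages them (a common variant; the paper's exact form is assumed)."""
    mean = statistics.mean(rssi_samples)
    median = statistics.median(rssi_samples)
    sd = statistics.pstdev(rssi_samples)
    kept = [r for r in rssi_samples if abs(r - mean) <= sd] or rssi_samples
    return (mean + median + statistics.mean(kept)) / 3

def rssi_to_distance(rssi, a=-40.0, n=2.5):
    """Invert the log-distance model RSSI(d) = A - 10 n log10(d).
    A (RSSI at 1 m, dBm) and n (path-loss exponent) would be fitted per
    scene; the values here are illustrative defaults."""
    return 10 ** ((a - rssi) / (10 * n))

def weighted_centroid(anchors, distances):
    """Improved centroid: weight each known node by 1/d, so closer
    anchors pull the estimate harder than distant ones."""
    w = [1.0 / max(d, 1e-9) for d in distances]
    total = sum(w)
    x = sum(wi * ax for wi, (ax, ay) in zip(w, anchors)) / total
    y = sum(wi * ay for wi, (ax, ay) in zip(w, anchors)) / total
    return x, y
```

With equal estimated distances to two anchors, the estimate falls midway between them; unequal distances pull it toward the nearer anchor, which is the "improvement" over the plain unweighted centroid.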
Procedia PDF Downloads 300
10394 Dielectrophoretic Characterization of Tin Oxide Nanowires for Biotechnology Application
Authors: Ahmad Sabry Mohamad, Kai F. Hoettges, Michael Pycraft Hughes
Abstract:
This study investigates nanowires using dielectrophoresis (DEP) in a non-aqueous suspension of tin (IV) oxide (SnO2) nanoparticles dispersed in N,N-dimethylformamide (DMF). The self-assembly of nanowires under DEP can be determined by impedance spectroscopy. In this work, the dielectrophoretic method was used to measure non-organic molecules in order to estimate the permittivity and conductivity characteristics of the nanowires. In aqueous suspensions, such as salt solutions, ionic effects dominate the transport of SnO2, and the wire growth threshold depends on the applied voltage, while DEP assembly of the nanowires depends on the applied frequency. The applications of dielectrophoretic collection are measured using impedance spectroscopy.
Keywords: dielectrophoresis, impedance spectroscopy, nanowires, N,N-dimethylformamide, SnO2
Procedia PDF Downloads 659
10393 Improvement of the Robust Proportional-Integral-Derivative (PID) Controller Parameters for Controlling the Frequency in the Intelligent Multi-Zone System in the Presence of Wind Generation Using the Seeker Optimization Algorithm
Authors: Roya Ahmadi Ahangar, Hamid Madadyari
Abstract:
The seeker optimization algorithm (SOA) is increasingly gaining popularity in the research community due to its effectiveness in solving some real-world optimization problems. This paper provides a load-frequency control method based on the SOA for removing oscillations in the power system. A three-zone power system, comprising a thermal zone, a hydraulic zone, and a wind zone, is equipped with robust proportional-integral-derivative (PID) controllers. The simulation results indicate that load-frequency changes in the wind zone of the multi-zone system are damped in a short period of time, and that during the oscillation period the oscillation amplitude is not significant. The simulation results also emphasize that the PID controller designed using the seeker optimization algorithm is robust and damps oscillations better than the traditional PID controller. The proposed controller's performance is compared with that of PID controllers tuned with the Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Artificial Bee Colony (ABC) algorithms in order to show the superior capability of the proposed SOA in tuning the PID controller. The simulation results confirm the better performance of the SOA-optimized PID controller compared to the PID controllers optimized with the PSO, GA, and ABC algorithms.
Keywords: load-frequency control, multi zone, robust PID controller, wind generation
Procedia PDF Downloads 303
10392 Molecular-Genetics Studies of New Unknown APMV Isolated from Wild Bird in Ukraine
Authors: Borys Stegniy, Anton Gerilovych, Oleksii Solodiankin, Vitaliy Bolotin, Anton Stegniy, Denys Muzyka, Claudio Afonso
Abstract:
A new APMV was isolated from a white-fronted goose in Ukraine. The isolate was tested serologically using monoclonal antibodies in haemagglutination-inhibition tests against APMV1-9, and it showed cross-reactions with APMV7. Subsequent investigations involved full-genome sequencing using random primers and cloning into pCRII-TOPO. Analysis of 100 transformed E. coli colonies using traditional sequencing allowed us to identify only 3 regions by BLAST. The first region, 367 bp long, had 70% nucleotide sequence identity to the APMV 12 isolate Wigeon/Italy/3920_1/2005 at genome position 2419-2784. The next region (344 bp) had 66% identity to the same APMV 12 isolate at position 4760-5103. The last region (365 bp) showed 71% identity to Newcastle disease virus strain M4 at position 12569-12928.
Keywords: APMV, Newcastle disease virus, Ukraine, full genome sequencing
Procedia PDF Downloads 442
10391 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data
Authors: Devin Simmons
Abstract:
At the USDOT's Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data makes it possible to use GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT's work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels, in conjunction with known ferry terminal locations, to model the average route traveled, both as a cartographic product and as confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys, based on the vessel's velocity and its changes over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether each trip represents a known ferry route. Given a large enough dataset, a route will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, the remaining trips are averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, which becomes the first point of the average route. Each trip is interpolated again to find the point that represents one percent of the journey's completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
Keywords: ferry vessels, transportation, modeling, AIS data
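The percent-of-journey averaging step can be sketched in plain Python: interpolate each trip polyline at 0%, 1%, ..., 100% of its arc length and take the centroid of those points at each step. Trip splitting by velocity and the DBSCAN filtering are omitted, and the flat (x, y) coordinates are a simplification of real projected AIS positions:

```python
import math

def interpolate_along(line, fraction):
    """Point at `fraction` (0..1) of the way along a polyline
    [(x, y), ...], measured by cumulative arc length."""
    seg_lens = [math.dist(a, b) for a, b in zip(line, line[1:])]
    target = fraction * sum(seg_lens)
    for (a, b), L in zip(zip(line, line[1:]), seg_lens):
        if target <= L and L > 0:
            t = target / L
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        target -= L
    return line[-1]

def average_route(trips, n_points=101):
    """Average several trip polylines into one route: the centroid of the
    interpolated points at each percent of journey completion, as the
    abstract describes (0%, 1%, ..., 100%)."""
    route = []
    for i in range(n_points):
        f = i / (n_points - 1)
        pts = [interpolate_along(trip, f) for trip in trips]
        route.append((sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts)))
    return route
```

Two parallel trips at y = 0 and y = 2, for instance, average to a route running along y = 1 from the shared start to the shared end.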
Procedia PDF Downloads 176
10390 Mental Health Literacy in Ghana: Consequences of Religiosity, Education, and Stigmatization
Authors: Peter Adu
Abstract:
Although research on the concept of Mental Health Literacy (MHL) is growing internationally, to the authors' best knowledge, the beliefs and knowledge of Ghanaians on specific mental disorders have not yet been explored. This vignette study was conducted to explore the relationships between religiosity, education, stigmatization, and MHL among Ghanaians, using a sample of laypeople (N = 409). The adapted questionnaire presented two vignettes (depression and schizophrenia) about a hypothetical person. The results revealed that more participants were able to recognize depression (47.4%) than schizophrenia (15.9%). Religiosity was not significantly associated with the recognition of mental disorders (MHL) but was positively related to both social and personal stigma for depression and negatively associated with personal and perceived stigma for schizophrenia. Moreover, education was found to relate positively to MHL and negatively to perceived stigma. Finally, perceived stigma was positively associated with MHL, whereas personal stigma for schizophrenia related negatively to MHL. In conclusion, education but not religiosity predicted identification accuracy, but both predictors were associated with various forms of stigma. Findings from this study have implications for MHL and anti-stigma campaigns in Ghana and other developing countries in the region.
Keywords: depression, education, mental health literacy, religiosity, schizophrenia
Procedia PDF Downloads 15710389 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible-light camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In surveying popular image fusion methods, we observe a recurring challenge: the high computational cost and complex processing steps that make these algorithms accurate also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper offer good results with minimum time complexity.Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
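As a hedged illustration of the general idea only — not the paper's MST decomposition or its region-based rule with consistency verification — a minimal two-scale fusion can average the low-pass base layers and keep the higher-magnitude detail coefficient per pixel. The function name and the box-filter scale are our own assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_two_scale(visible, infrared, size=9):
    """Simplified two-scale fusion: average the base (low-pass) layers,
    keep the detail (high-pass) coefficient with the larger magnitude."""
    base_v = uniform_filter(visible, size=size)
    base_i = uniform_filter(infrared, size=size)
    detail_v, detail_i = visible - base_v, infrared - base_i
    # choose-max-absolute rule on the detail layer, per pixel
    detail = np.where(np.abs(detail_v) >= np.abs(detail_i), detail_v, detail_i)
    return (base_v + base_i) / 2.0 + detail
```

A feature present only in the thermal source (e.g., a hot spot) survives into the fused image because its detail coefficient dominates, which is the behavior the coefficient fusion rules in the paper aim for at every MST level.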
Procedia PDF Downloads 11510388 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation
Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst
Abstract:
There is increasing importance in accelerating the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials in particulate form are fed into the system and dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up over the particle surface. A subsequent step takes the charged particles to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions that differ in their protein content: a fiber-rich, low-protein fraction and a high-protein, low-fiber fraction. Prior to testing, materials undergo a milling process, and some samples are stored under controlled humidity conditions; in this way, the influence of both particle size and humidity content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectric separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/sec to kg/hour. Triboelectrostatic separation technology opens up huge potential for the exploitation of so far underutilized alternative protein sources. Agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation
Procedia PDF Downloads 19010387 Shopping Centers in the Context of a Growing and Changing City: The Case of Konya Kent Plaza
Authors: H. Derya Arslan
Abstract:
Shopping centers have become an important part of urban life. The number of shopping centers in Turkey has increased rapidly over the last ten years, and the malls built at increasing speed over the last two decades meet most of people's social and cultural needs. In this study, the architectural characteristics of a recent mall built in the city of Konya, Turkey, are discussed, and the mall in question is assessed in the context of a growing and changing city. The study opens up new horizons and areas of discussion for entrepreneurs who make significant investments in shopping centers, architects who design shopping centers as efficient commercial and social environments, and social scientists who investigate the effects of the increase in these enclosed urban spaces on urban life.Keywords: shopping center, architecture, city, social
Procedia PDF Downloads 33610386 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research effort has been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, speed of processing, computational complexity, and consequently memory space used. The standard Hough Transform has been successfully implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is; higher accuracy consequently consumes more time and memory space, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the contradiction between memory space, running time, and accuracy. Our algorithm starts with a first pass of angle estimation accurate to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy. Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. The memory space and processing time are also tabulated, with skew angles assumed to lie between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
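The coarse-to-fine refinement can be sketched as follows. This is a minimal Python/NumPy illustration under our own assumptions (a projection-histogram Hough score and a fixed refinement window of ten steps either side of the previous estimate), not the authors' MATLAB implementation:

```python
import numpy as np

def hough_score_angle(image, angles):
    """Return the candidate angle whose projection histogram is most peaked."""
    ys, xs = np.nonzero(image)
    best_angle, best_score = angles[0], -1.0
    for a in angles:
        theta = np.deg2rad(a)
        # signed distance of each foreground pixel from a line of slope tan(theta)
        rho = ys * np.cos(theta) - xs * np.sin(theta)
        hist, _ = np.histogram(rho, bins=image.shape[0])
        score = hist.var()  # sharply peaked histogram => text rows aligned
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle

def coarse_to_fine_skew(image, low=-45.0, high=45.0, decimals=1):
    """First pass with a 1-degree step, then refine one decimal place per pass."""
    step = 1.0
    estimate = hough_score_angle(image, np.arange(low, high + step, step))
    for _ in range(decimals):
        step /= 10.0
        window = np.arange(estimate - 10 * step,
                           estimate + 10 * step + step / 2, step)
        estimate = hough_score_angle(image, window)
    return float(estimate)
```

Each refinement pass evaluates only about 20 candidate angles instead of the thousands a single fine-stepped pass would need, which is the space/time saving the abstract describes.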
Procedia PDF Downloads 15810385 Overview and Post Damage Analysis of Nepal Earthquake 2015
Authors: Vipin Kumar Singhal, Rohit Kumar Mittal, Pavitra Ranjan Maiti
Abstract:
Damage analysis is one of the preliminary activities to be carried out after an earthquake, so as to enhance seismic building design technologies and prevent similar types of failure during future earthquakes. This research article investigates the damage patterns and most probable causes of failure by examining photographs of seven major buildings that collapsed or were damaged, evenly spread over the region, during the Mw 7.8 Nepal earthquake of 2015, which was followed by more than 400 aftershocks of Mw 4, one of which reached a magnitude of Mw 7.3. Over 250,000 buildings were damaged, and more than 9,000 people were injured in this earthquake. Photographs of these buildings were collected after the earthquake, the cause of failure was estimated, the severity of damage was assessed, and comments on the reparability of each structure were made. Based on these observations, it was concluded that damage to reinforced concrete buildings was less than that to masonry structures. The number of damaged buildings was high near the Kathmandu region due to the high building density there. This type of damage analysis can be used as a cost-effective and quick method of damage assessment during earthquakes.Keywords: Nepal earthquake, damage analysis, damage assessment, damage scales
Procedia PDF Downloads 37410384 Transportation Accidents Mortality Modeling in Thailand
Authors: W. Sriwattanapongse, S. Prasitwattanaseree, S. Wongtrangan
Abstract:
Mortality from transportation accidents is a major problem that leads to loss of human lives and economic damage. The objective was to identify suitable statistical models for estimating mortality rates due to transportation accidents in Thailand, using data from 2000 to 2009 taken from death certificates in the vital registration database. The numbers of deaths and mortality rates were computed, classified by gender, age, year, and region. There were 114,790 transportation accident deaths. The highest average age-specific transport accident mortality rate is 3.11 per 100,000 per year, among males in the Southern region, and the lowest is 1.79 per 100,000 per year, among females in the North-East region. Linear, Poisson, and negative binomial models were fitted, and the best model was chosen based on analysis of deviance and AIC. The negative binomial model was clearly the most appropriate fit.Keywords: transportation accidents, mortality, modeling, analysis of deviance
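The model-selection step — preferring the negative binomial over the Poisson for overdispersed counts on AIC — can be illustrated with a minimal sketch. The simulated counts, the method-of-moments fit, and the function names are our own assumptions, not the registry data or fitting code used in the study:

```python
import numpy as np
from scipy.stats import poisson, nbinom

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Simulated overdispersed death counts (variance > mean), standing in
# for the registry data used in the study.
rng = np.random.default_rng(0)
counts = rng.negative_binomial(2, 0.2, size=1000)

# Poisson fit: the MLE of the rate is the sample mean (1 parameter).
lam = counts.mean()
aic_pois = aic(poisson.logpmf(counts, lam).sum(), 1)

# Negative binomial fit by method of moments (2 parameters).
m, v = counts.mean(), counts.var()
p_hat = m / v
n_hat = m * m / (v - m)
aic_nb = aic(nbinom.logpmf(counts, n_hat, p_hat).sum(), 2)
```

When the variance exceeds the mean, the extra dispersion parameter more than pays for its AIC penalty, which is the pattern the study reports for the Thai mortality data.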
Procedia PDF Downloads 24410383 Evaluation of Heat of Hydration and Strength Development in Natural Pozzolan-Incorporated Cement from the Gulf Region
Authors: S. Al-Fadala, J. Chakkamalayath, S. Al-Bahar, A. Al-Aibani, S. Ahmed
Abstract:
Globally, the use of pozzolan in blended cement is gaining great interest due to its desirable environmental and energy-conservation effects and the technical benefits it provides to the performance of cement. The deterioration of concrete structures in the marine environment and extreme climates demands the use of pozzolana cement in concrete construction in the Gulf region. Moreover, natural sources of cement clinker materials are limited in the Gulf region, and the cement industry imports the raw materials for the production of Portland cement, increasing the greenhouse gas effect through CO₂ emissions from transportation. Even though the Gulf region has vast deposits of natural pozzolana, they have not been properly explored for the production of high-performance concrete. Hence, optimal use of regionally available natural pozzolana for the production of blended cement can result in sustainable construction. This paper investigates the effect of incorporating natural pozzolan sourced from the Gulf region on the performance of blended cement in terms of heat evolution and strength development. For this purpose, a locally produced Ordinary Portland Cement (OPC) and pozzolan-incorporated blended cements containing different amounts of natural pozzolan (volcanic ash) were prepared on a laboratory scale, and their strength development and heat evolution were measured and quantified. Promising strength development results were obtained for blends with Volcanic Ash (VA) replacement percentages varying from 10 to 30%. The results showed that the heat of hydration decreased as the percentage of OPC replaced with VA increased, indicating increased retardation of hydration due to the addition of VA. This property could be used in mass concreting, in which a reduction in the heat of hydration is required to reduce cracking in concrete, especially in hot weather concreting.Keywords: blended cement, hot weather, hydration, volcanic ash
Procedia PDF Downloads 32510382 A Protein-Wave Alignment Tool for Frequency Related Homologies Identification in Polypeptide Sequences
Authors: Victor Prevost, Solene Landerneau, Michel Duhamel, Joel Sternheimer, Olivier Gallet, Pedro Ferrandiz, Marwa Mokni
Abstract:
The search for homologous proteins is one of the ongoing challenges in biology and bioinformatics. Traditionally, a pair of proteins is considered homologous when they originate from the same ancestral protein; in such a case, their sequences share similarities, and considerable research effort is spent investigating them. On this basis, we propose the Protein-Wave Alignment Tool (”P-WAT”), developed within the framework of the France Relance 2030 plan. Our work takes into consideration the mass-related wave aspect of protein biosynthesis by associating a specific frequency with each amino acid according to its mass; amino acids are then regrouped within their mass category. In this way, our algorithm produces specific alignments in addition to those obtained with the common amino acid coding system. For this purpose, we developed the original ”P-WAT” algorithm, able to address large protein databases with attributes such as species, protein names, etc., allowing us to align a user's query against a set of specific protein sequences. The primary intent of this algorithm is to achieve efficient alignments, in this specific conceptual frame, by minimizing execution costs and information loss. Our algorithm identifies sequence similarities by searching for matches of sub-sequences of different sizes, referred to as primers. It relies on Boolean operations upon a dot plot matrix to identify primer amino acids, common to both proteins, which are likely to be part of a significant alignment of peptides. From those primers, dynamic programming-like traceback operations generate alignments and alignment scores based on an adjusted PAM250 matrix.Keywords: protein, alignment, homologous, Genodic
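The dot-plot primer search can be illustrated with a small sketch: a Boolean match matrix whose length-k diagonal runs mark shared k-mers (primers). This is our own minimal illustration under the standard amino acid coding — not the ”P-WAT” implementation with its mass-based frequency grouping or PAM250 traceback:

```python
import numpy as np

def dot_plot(a, b):
    """Boolean matrix M where M[i, j] is True when a[i] == b[j]."""
    av = np.frombuffer(a.encode("ascii"), dtype=np.uint8)
    bv = np.frombuffer(b.encode("ascii"), dtype=np.uint8)
    return av[:, None] == bv[None, :]

def primers(a, b, k=3):
    """Start positions (i, j) of length-k exact matches shared by a and b."""
    m = dot_plot(a, b)
    rows, cols = len(a) - k + 1, len(b) - k + 1
    run = np.ones((rows, cols), dtype=bool)
    for d in range(k):  # AND the matrix with its k diagonal shifts
        run &= m[d:d + rows, d:d + cols]
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(run))]
```

Each surviving True cell is a primer seed from which a traceback pass could extend an alignment, as the abstract describes.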
Procedia PDF Downloads 11310381 A New Design Methodology for Partially Reconfigurable Systems-on-Chip
Authors: Roukaya Dalbouchi, Abdelkrin Zitouni
Abstract:
In this paper, we propose a novel design methodology for Dynamically Partially Reconfigurable (DPR) systems. This type of system has the property of being modifiable after its design and during its execution. The suggested design methodology is generic in terms of granularity, number of modules, and reconfigurable regions, is suitable for any type of modern application, and is based on the interconnection of several design stages. The recommended methodology represents a guide for the design of DPR architectures that achieve a good compromise between reconfiguration and performance. To validate the proposed methodology, we use video watermarking as an application. The comparison results show that the proposed methodology supports all stages of DPR architecture design and is characterized by a high abstraction level. It provides a dynamically/partially reconfigurable architecture and guarantees material efficiency, flexibility of reconfiguration, and superior performance in terms of frequency and power consumption.Keywords: dynamically reconfigurable system, block matching algorithm, partial reconfiguration, motion vectors, video watermarking
Procedia PDF Downloads 9510380 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex than before, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks can collect and transform data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer, distributed, and dynamic cluster architecture that manages UAVs using an AI-based resource allocation algorithm to address the network overloading problem. By separating the services of each UAV, the hierarchical UAV cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected, considering the devices' Central Processing Unit (CPU), operational (RAM) and permanent (ROM) memory, battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles
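The cluster-formation and head-selection steps can be sketched as follows: a plain NumPy k-means groups device positions into load regions, and a weighted CPU/RAM/ROM/battery score picks the head and vice head. This is a minimal sketch under our own assumptions — the weights, the farthest-point seeding, and both function names are illustrative, not the paper's:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means with deterministic farthest-point seeding."""
    centers = [points[0]]
    for _ in range(k - 1):  # seed each new center far from existing ones
        d = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

def select_heads(resources, weights=(0.3, 0.2, 0.1, 0.4)):
    """Head and vice head = the two UAVs with the highest weighted
    CPU/RAM/ROM/battery score (the weights here are illustrative)."""
    scores = np.asarray(resources, dtype=float) @ np.asarray(weights)
    order = np.argsort(scores)[::-1]
    return int(order[0]), int(order[1])
```

Selecting the runner-up as vice head gives the backup role described in the abstract: if the head fails, the next-best-resourced UAV already holds a copy of its data.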
Procedia PDF Downloads 111