Search results for: search algorithms
1147 Prioritizing the Most Important Information from Contractors’ BIM Handover for Firefighters’ Responsibilities
Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob
Abstract:
The fire service is responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters’ responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. Insufficient knowledge about a building’s indoor environment impedes firefighters’ capabilities and leads to lost property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. Therefore, the purpose of this research is to prioritize the information required by firefighters, from the most important to the least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that “the location of exit doors, windows, corridors, elevators, and stairs”, “material of building elements”, and “building data” are the three most important information items specified by firefighters. The results also implied that 2D models of architectural, structural, and wayfinding information are easier to understand than 3D models, while a 3D model of the MEP system can convey more information than a 2D model. Furthermore, color in visualization can help firefighters understand building information more easily and quickly. Sufficient internal consistency of all responses was demonstrated by developing a Pearson correlation matrix and obtaining a Cronbach’s alpha of 0.916. Therefore, the results of this study are reliable and can be generalized to the population.
Keywords: BIM, building fire response, ranking, visualization
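The reported Cronbach’s alpha of 0.916 is a standard internal-consistency statistic computed from item-level survey responses. The following is a minimal sketch of that calculation; the ratings matrix and item count are invented for illustration and are not the survey data from this study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each survey item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-5 importance ratings from five firefighters on four information items
ratings = np.array([
    [5, 4, 5, 4],
    [4, 4, 5, 3],
    [5, 5, 5, 4],
    [3, 4, 4, 3],
    [4, 5, 5, 4],
])
print(round(cronbach_alpha(ratings), 3))
```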
Procedia PDF Downloads 132
1146 Feature Based Unsupervised Intrusion Detection
Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein
Abstract:
The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For the experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka
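A minimal sketch of the pipeline described above: rank features by information gain (approximated here with scikit-learn’s mutual information), keep the top-ranked subset, and cluster with K-means into two groups. The data is a synthetic stand-in for NSL-KDD, and the feature count kept is an assumption; the study itself used Weka rather than scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Placeholder data standing in for NSL-KDD features/labels (0 = normal, 1 = attack)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))          # NSL-KDD records have 41 features
y = (X[:, 0] + X[:, 5] > 0).astype(int)  # synthetic labels for illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.6, random_state=0)

# Rank features by information gain (mutual information) and keep the top k
ig = mutual_info_classif(X_train, y_train, random_state=0)
top_k = np.argsort(ig)[::-1][:10]

# Cluster the reduced training data into two groups (normal vs. attack)
scaler = StandardScaler().fit(X_train[:, top_k])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(X_train[:, top_k]))

# Label each cluster by the majority class seen in training, then score the test split
cluster_label = {c: int(np.round(y_train[km.labels_ == c].mean())) for c in (0, 1)}
pred = np.array([cluster_label[c] for c in km.predict(scaler.transform(X_test[:, top_k]))])
print("accuracy:", np.mean(pred == y_test))
```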
Procedia PDF Downloads 294
1145 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we propose an approach for detecting the behavior of the viewers of a TV program in a non-controlled environment. The experiment we propose is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). 23 participants were observed while watching their TV programs during three phases: before, during, and after watching a TV program. Their behaviors were detected using an approach based on Dempster-Shafer theory (DST) in two phases: the first phase dynamically approximates the mass functions using an approach based on the correlation coefficient; the second phase computes the approximate mass functions. To approximate the mass functions, two approaches were tested. The first approach was to divide each feature’s data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors through the use of classifier algorithms and to add uncertainty to the prediction based on the uncertainty of the model. Results showed that mixing the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95%, and 96% for the first, second, and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with usual connected objects, taking into account the various uncertainties that can be generated.
Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment
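The fusion step in Dempster-Shafer theory is Dempster’s rule of combination, which merges two mass functions and renormalizes away conflicting mass. A minimal sketch follows; the behavior labels and mass values are illustrative and are not taken from this study.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over frozenset focal elements using Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                       # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}  # normalise out the conflict

# Illustrative masses from two connected objects over three hypothetical behaviors
WATCHING, PHONE, AWAY = "watching", "on_phone", "away"
m_watch = {frozenset({WATCHING}): 0.6, frozenset({WATCHING, PHONE}): 0.3,
           frozenset({WATCHING, PHONE, AWAY}): 0.1}
m_remote = {frozenset({WATCHING}): 0.5, frozenset({AWAY}): 0.2,
            frozenset({WATCHING, PHONE, AWAY}): 0.3}
print(dempster_combine(m_watch, m_remote))
```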
Procedia PDF Downloads 228
1144 A Parking Demand Forecasting Method for Making Parking Policy in the Center of Kabul City
Authors: Roien Qiam, Shoshi Mizokami
Abstract:
Parking demand in the Central Business District (CBD) has grown with the increasing number of private vehicles, driven by rapid economic growth and the lack of an efficient public transport and traffic management system. This has resulted in low mobility, poor accessibility, serious congestion, high rates of traffic accident fatalities and injuries, and air pollution, mainly because people have to drive around slowly to find a vacant spot. With a parking pricing and enforcement policy, considerable improvement could be achieved, and on-street parking spaces could be managed efficiently and effectively. To evaluate parking demand and formulate parking policy, it is necessary to understand the current parking conditions and drivers’ behavior: how drivers choose their parking type and location, and how they behave when searching for a vacant parking spot under given parking charges and search times. This study presents results from observational, revealed preference, and stated preference surveys and an experiment. The data obtained show that there is a gap between parking supply and demand and that this gap has reached its maximum. For the modeling of the parking decision, a choice model was constructed based on discrete choice modeling theory, and a multinomial logit model was estimated using the SP survey data; the model represents the choice among the alternatives of priced on-street, off-street, and illegal parking. Individuals choose a parking type based on their preferences concerning parking charges, search times, access times, and waiting times. The parking assignment model was obtained directly from the behavioral model and is used in the parking simulation. The study concludes with an evaluation of parking policy.
Keywords: CBD, parking demand forecast, parking policy, parking choice model
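A multinomial logit model turns linear-in-parameters utilities (built here from parking charge, search, access, and waiting times) into choice probabilities. The sketch below shows that computation only; the coefficients, alternative-specific constants, and attribute levels are invented placeholders, not the estimates from this study.

```python
import numpy as np

# Hypothetical utility coefficients (would normally be estimated from the SP survey data)
BETA = {"charge": -0.04, "search": -0.08, "access": -0.05, "wait": -0.06}
ASC = {"on_street": 0.0, "off_street": -0.3, "illegal": -1.2}  # alternative-specific constants

# Attribute levels for one driver: charge (currency units/hour), search/access/wait times (min)
alternatives = {
    "on_street":  {"charge": 30, "search": 10, "access": 2, "wait": 0},
    "off_street": {"charge": 20, "search": 2,  "access": 6, "wait": 3},
    "illegal":    {"charge": 0,  "search": 5,  "access": 1, "wait": 0},
}

def mnl_probabilities(alts: dict) -> dict:
    """P(i) = exp(V_i) / sum_j exp(V_j), with utilities V linear in the attributes."""
    v = {name: ASC[name] + sum(BETA[k] * x[k] for k in BETA) for name, x in alts.items()}
    exp_v = {name: np.exp(val) for name, val in v.items()}
    denom = sum(exp_v.values())
    return {name: val / denom for name, val in exp_v.items()}

print(mnl_probabilities(alternatives))
```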
Procedia PDF Downloads 193
1143 Maternal and Neonatal Outcomes in Women Undergoing Bariatric Surgery: A Systematic Review and Meta-Analysis
Authors: Nicolas Galazis, Nikolina Docheva, Constantinos Simillis, Kypros Nicolaides
Abstract:
Background: Obese women are at increased risk for many pregnancy complications, and bariatric surgery (BS) before pregnancy has been shown to improve some of these. Objectives: To review the current literature and quantitatively assess the obstetric and neonatal outcomes in pregnant women who have undergone BS. Search Strategy: MEDLINE, EMBASE and Cochrane databases were searched using relevant keywords to identify studies that reported on pregnancy outcomes after BS. Selection Criteria: Pregnancy outcomes in, firstly, women after BS compared with obese or BMI-matched women with no BS and, secondly, women after BS compared with the same or different women before BS. Only observational studies were included. Data Collection and Analysis: Two investigators independently collected data on study characteristics and outcome measures of interest. These were analysed using the random effects model. Heterogeneity was assessed and sensitivity analysis was performed to account for publication bias. Main Results: The entry criteria were fulfilled by 17 non-randomised cohort or case-control studies, including seven with high methodological quality scores. In the BS group, compared to controls, there was a lower incidence of preeclampsia (OR 0.45, 95% CI 0.25-0.80; p=0.007), GDM (OR 0.47, 95% CI 0.40-0.56; p<0.001) and large neonates (OR 0.46, 95% CI 0.34-0.62; p<0.001), and a higher incidence of small neonates (OR 1.93, 95% CI 1.52-2.44; p<0.001), preterm birth (OR 1.31, 95% CI 1.08-1.58; p=0.006), admission for neonatal intensive care (OR 1.33, 95% CI 1.02-1.72; p=0.03) and maternal anaemia (OR 3.41, 95% CI 1.56-7.44; p=0.002). Conclusions: BS as a whole improves some pregnancy outcomes. Laparoscopic adjustable gastric banding does not appear to increase the rate of small neonates that was seen with other BS procedures. Obese women of childbearing age undergoing BS need to be aware of these outcomes.
Keywords: bariatric surgery, pregnancy, preeclampsia, gestational diabetes, birth weight
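The pooled odds ratios above come from a random-effects model. A common way to do that pooling is the DerSimonian-Laird approach; the sketch below illustrates the calculation under that assumption, using invented per-study odds ratios and confidence intervals rather than the data from this meta-analysis.

```python
import numpy as np

def pool_random_effects(ors, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of odds ratios given their 95% CIs."""
    y = np.log(ors)                                    # log odds ratios
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2                                    # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                      # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = np.sqrt(1.0 / np.sum(w_star))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Invented per-study odds ratios for one outcome, with 95% CIs
or_pooled, lo, hi = pool_random_effects(
    ors=[0.40, 0.55, 0.47], ci_lows=[0.25, 0.35, 0.30], ci_highs=[0.64, 0.86, 0.74])
print(f"pooled OR {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```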
Procedia PDF Downloads 406
1142 Detection and Classification of Mammogram Images Using Principal Component Analysis and Lazy Classifiers
Authors: Rajkumar Kolangarakandy
Abstract:
Feature extraction and selection are the primary parts of any mammogram classification algorithm. The choice of features, attributes, or measurements has an important influence on any classification system. Discrete Wavelet Transformation (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after decomposing the mammogram images using wavelet transformations have a high dimension and, despite this, are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transformation coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are further classified as benign or malignant. An additional classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers Kstar, IBL, and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimension reduction.
Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL
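A minimal sketch of the two ideas combined here: reduce high-dimensional feature vectors (standing in for DWT coefficients) with PCA, then classify with an instance-based ("lazy") learner. Weka's IBk corresponds roughly to scikit-learn's k-nearest neighbors, which is what the sketch uses; the data is synthetic and the variance threshold is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for high-dimensional DWT coefficient vectors of mammogram ROIs
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 256))                  # 256 wavelet coefficients per ROI
y = (X[:, :8].sum(axis=1) > 0).astype(int)       # 0 = normal, 1 = abnormal (illustrative labels)

# PCA keeps the components explaining 95% of the variance; KNN is the lazy learner (IBk analogue)
model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),
                      KNeighborsClassifier(n_neighbors=5))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```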
Procedia PDF Downloads 333
1141 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more often in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data. Ensemble learning is a versatile machine learning approach in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The proposed fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
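The FOMOCE algorithm itself is not reproduced here; the following is a generic clustering-ensemble consensus built from a co-association matrix, which illustrates the basic ensemble idea the paper builds on. The two "sources", the base clusterings, and the final number of clusters are all illustrative; the sketch assumes scikit-learn 1.2 or later for the `metric="precomputed"` parameter.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

# Two synthetic "sources" describing the same 300 objects with different feature sets
X1, _ = make_blobs(n_samples=300, centers=3, n_features=4, random_state=0)
X2, _ = make_blobs(n_samples=300, centers=3, n_features=6, random_state=0)

# Base clusterings from each source (different k and seeds provide diversity)
base_labels = []
for X in (X1, X2):
    for k in (2, 3, 4):
        base_labels.append(KMeans(n_clusters=k, n_init=10, random_state=k).fit_predict(X))

# Co-association matrix: fraction of base clusterings that place each pair of objects together
n = X1.shape[0]
coassoc = np.zeros((n, n))
for labels in base_labels:
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= len(base_labels)

# Consensus clustering on the co-association similarity, turned into a distance matrix
consensus = AgglomerativeClustering(n_clusters=3, metric="precomputed", linkage="average")
final_labels = consensus.fit_predict(1.0 - coassoc)
print(np.bincount(final_labels))
```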
Procedia PDF Downloads 188
1140 Mapping a Data Governance Framework to the Continuum of Care in the Active Assisted Living Context
Authors: Gaya Bin Noon, Thoko Hanjahanja-Phiri, Laura Xavier Fadrique, Plinio Pelegrini Morita, Hélène Vaillancourt, Jennifer Teague, Tania Donovska
Abstract:
Active Assisted Living (AAL) refers to systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for care recipients. As the population ages, there is a pressing need for non-intrusive, continuous, adaptable, and reliable health monitoring tools to support aging in place. AAL has great potential to support these efforts with the wide variety of solutions currently available, but insufficient efforts have been made to address concerns arising from the integration of AAL into care. The purpose of this research was to (1) explore the integration of AAL technologies and data into the clinical pathway, and (2) map data access and governance for AAL technology in order to develop standards for use by policy-makers, technology manufacturers, and developers of smart communities for seniors. This was done through four successive research phases: (1) literature search to explore existing work in this area and identify lessons learned; (2) modeling of the continuum of care; (3) adapting a framework for data governance into the AAL context; and (4) interviews with stakeholders to explore the applicability of previous work. Opportunities for standards found in these research phases included a need for greater consistency in language and technology requirements, better role definition regarding who can access and who is responsible for taking action based on the gathered data, and understanding of the privacy-utility tradeoff inherent in using AAL technologies in care settings.
Keywords: active assisted living, aging in place, internet of things, standards
Procedia PDF Downloads 130
1139 Quantification and Identification of the Main Components of the Biomass of the Microalgae Scenedesmus SP. – Prospection of Molecules of Commercial Interest
Authors: Carolina V. Viegas, Monique Gonçalves, Gisel Chenard Diaz, Yordanka Reyes Cruz, Donato Alexandre Gomes Aranda
Abstract:
To develop the massive cultivation of microalgae, it is necessary to isolate and characterize the species, improving genetic tools in search of specific characteristics. Therefore, the detection, identification and quantification of the compounds that compose Scenedesmus sp. were prerequisites to verify the potential of this microalga. The main objective of this work was to carry out the characterization of Scenedesmus sp. in terms of ash, carbohydrate, protein and lipid content, as well as the determination of the composition of its lipid classes and main fatty acids. The biomass of Scenedesmus sp. showed 15.29 ± 0.23% ash, and CaO (36.17%) was the main component of this fraction. The total protein and carbohydrate content of the biomass was 40.74 ± 1.01% and 23.37 ± 0.95%, respectively, proving it to be a potential source of proteins as well as carbohydrates for the production of ethanol via fermentation. The lipid contents extracted via Bligh & Dyer and in situ saponification were 8.18 ± 0.13% and 4.11 ± 0.11%, respectively. In the lipid extracts obtained via Bligh & Dyer, approximately 50% of the composition of this fraction consists of fatty compounds, while the other half is composed of an unsaponifiable fraction consisting mainly of chlorophylls, phytosterols and carotenes. From the lower yield, it was possible to obtain a selectivity of 92.14% for fatty components (fatty acids and fatty esters), confirmed through infrared spectroscopy. The presence of polyunsaturated acids (~45%) in the lipid extracts indicated the potential of this fraction as a source of nutraceuticals. The results indicate that the biomass of Scenedesmus sp. can become a promising source for obtaining polyunsaturated fatty acids, carotenoids and proteins, as well as for the simultaneous recovery of different compounds of high commercial value.
Keywords: microalgae, Desmodesmus, lipid classes, fatty acid profile, proteins, carbohydrates
Procedia PDF Downloads 95
1138 A Methodology for Automatic Diversification of Document Categories
Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim
Abstract:
Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually. However, in the case of manual categorization, not only can the accuracy of the categorization not be guaranteed, but the categorization also requires a large amount of time and high costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because they assume that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend a category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Keywords: big data analysis, document classification, multi-category, text mining, topic analysis
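The authors’ exact methodology is not reproduced here; the sketch below only illustrates the general idea of extending single-category labels to multiple categories through category-topic relationships: learn topics from a single-categorized corpus, build a category-topic profile per category, and assign every category whose profile is similar enough to a document’s own topic mix. The corpus, categories, topic count, and similarity threshold are all invented.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny single-categorized corpus (documents and categories are illustrative)
docs = ["stock market shares rise", "election vote parliament", "match goal league win",
        "market election funding bill", "league transfer market value"]
cats = ["economy", "politics", "sport", "politics", "sport"]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)                     # document-topic distributions

# Category-topic affinity: average topic weight over the documents of each category
categories = sorted(set(cats))
cat_topic = np.vstack([doc_topic[[c == cat for c in cats]].mean(axis=0) for cat in categories])

# A document receives every category whose topic profile is similar enough to its own
def multi_categories(doc_vec, threshold=0.7):
    sims = cat_topic @ doc_vec / (np.linalg.norm(cat_topic, axis=1) * np.linalg.norm(doc_vec))
    return [c for c, s in zip(categories, sims) if s >= threshold]

for d, vec in zip(docs, doc_topic):
    print(d, "->", multi_categories(vec))
```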
Procedia PDF Downloads 271
1137 Using Monte Carlo Model for Simulation of Rented Housing in Mashhad, Iran
Authors: Mohammad Rahim Rahnama
Abstract:
The study employs the Monte Carlo method to simulate rented housing in Mashhad, the second largest city in Iran. A total of 334 rental residential units in Mashhad, including both apartments and houses (villas), were randomly selected from advertisements placed in Khorasan newspapers during the months of July and August 2015. In order to simulate the monthly rent price, a rent index was calculated by combining the mortgage and the rent price. In the next step, the relation between the floor area and the number of bedrooms of each unit, for both apartments and houses (villas), was calculated through multivariate regression using SPSS and was coded in XML. The initial model was called using the simulation button in SPSS and was simulated using triangular and binomial algorithms. The findings revealed that the average simulated rental index was $548.5 per month. Calculating the sensitivity of the rental index to the number of bedrooms, we found that, firstly, 97% of units have three bedrooms and, secondly, as the number of bedrooms increases from one to three, the percentage of one-bedroom units with a rent price of less than $200 decreases from 10% to 0. Conversely, for units with a rent price of more than $571.4, the percentage increases from 37% to 48%. In the light of these findings, it becomes clear that planning to build rental residential units, overseeing rent prices, and granting subsidies to rental residential units, for apartments with two bedrooms, present a suitable policy for regulating residential units in Mashhad.
Keywords: Mashhad, Monte Carlo, simulation, rent price, residential unit
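A minimal sketch of the kind of Monte Carlo draw described above: sample floor area from a triangular distribution and bedroom count from a binomial distribution, then feed a regression-style relation for the rent index. The distribution bounds and regression coefficients are placeholders, not the values estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of simulated rental units

# Floor area (m^2) drawn from a triangular distribution; bounds are illustrative
floor_area = rng.triangular(left=50, mode=85, right=200, size=N)

# Number of bedrooms drawn from a binomial distribution (1-3 bedrooms)
bedrooms = 1 + rng.binomial(n=2, p=0.7, size=N)

# Placeholder regression relation for the monthly rent index (USD)
noise = rng.normal(0, 60, size=N)
rent_index = 100 + 3.5 * floor_area + 55 * bedrooms + noise

print("mean simulated rent: %.1f USD" % rent_index.mean())
print("share of units under 200 USD: %.1f%%" % (100 * (rent_index < 200).mean()))
print("share of units over 571.4 USD: %.1f%%" % (100 * (rent_index > 571.4).mean()))
```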
Procedia PDF Downloads 273
1136 Measurement of Solids Concentration in Hydrocyclone Using ERT: Validation Against CFD
Authors: Vakamalla Teja Reddy, Narasimha Mangadoddy
Abstract:
Hydrocyclones are used to separate particles into different size fractions in the mineral processing, chemical and metallurgical industries. High-speed video imaging, Laser Doppler Anemometry (LDA), X-ray and gamma ray tomography have previously been used to measure two-phase flow characteristics in the cyclone. However, investigation of solids flow characteristics inside the cyclone is often impeded by the nature of the process, owing to slurry opaqueness and solid metal wall vessels. In this work, a dual-plane high-speed electrical resistance tomography (ERT) system is used to measure hydrocyclone internal flow dynamics in situ. Experiments are carried out in a 3-inch hydrocyclone for feed solids concentrations varying in the range of 0-50%. ERT data analysis, through optimized FEM mesh size and reconstruction algorithms applied to air-core and solids concentration tomograms, is assessed. Results are presented in terms of the air-core diameter and solids volume fraction contours, using Maxwell’s equation, for various hydrocyclone operational parameters. ERT confirms that the area occupied by the air core and the wall solids conductivity levels decrease with increasing feed solids concentration. An algebraic slip mixture based multi-phase computational fluid dynamics (CFD) model is used to predict the air-core size and the solids concentrations in the hydrocyclone. Validation of the air-core size and mean solids volume fractions from ERT measurements against the CFD simulations is attempted.
Keywords: air-core, electrical resistance tomography, hydrocyclone, multi-phase CFD
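The abstract refers to Maxwell’s equation for converting measured mixture conductivity into a solids volume fraction. For a non-conducting dispersed solid phase, a commonly used simplified form (stated here as an assumption, not taken from the paper) is
\[
\frac{\sigma_{mc}}{\sigma_{1}} = \frac{2\,(1-\alpha)}{2+\alpha}
\quad\Longrightarrow\quad
\alpha = \frac{2\,(\sigma_{1}-\sigma_{mc})}{2\sigma_{1}+\sigma_{mc}},
\]
where \(\sigma_{1}\) is the conductivity of the continuous (liquid) phase, \(\sigma_{mc}\) the mixture conductivity measured on the ERT tomogram, and \(\alpha\) the local solids volume fraction.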
Procedia PDF Downloads 378
1135 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data
Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad
Abstract:
Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data acquired through satellites, radars and sensors consist of important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms available, the classification is done using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method which considers neighboring pixels in a region for feature extraction and evaluates classifications according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; the attributes were generated by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data, for accurate information extraction and classification from high spatial resolution remote sensing imagery.
Keywords: remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction
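A minimal sketch of the core idea: build a per-pixel feature vector from the pixel intensity and the mean of its neighborhood, then train an SVM to separate water from non-water pixels. The image, labels, neighborhood size, and SVM parameters are synthetic assumptions, not the dataset or settings used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic single-band "satellite" image and a water/non-water label mask
rng = np.random.default_rng(0)
image = rng.normal(loc=0.5, scale=0.1, size=(128, 128))
image[40:90, 30:100] -= 0.3                      # darker region standing in for a waterbody
labels = (image < 0.35).astype(int)

# Per-pixel features: raw intensity plus the mean reflectance of the 5x5 neighborhood
neighborhood_mean = uniform_filter(image, size=5)
X = np.column_stack([image.ravel(), neighborhood_mean.ravel()])
y = labels.ravel()

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.5, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```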
Procedia PDF Downloads 335
1134 Sustainable Renovation of Cultural Buildings Case Study: Red Bay National Historic Site, Canada
Authors: Richard Briginshaw, Hana Alaojeli, Javaria Ahmad, Hamza Gaffar, Nourtan Murad
Abstract:
Sustainable renovations to cultural buildings and sites require a high level of competency in the sometimes conflicting areas of social/historical demands, environmental concerns, and the programmatic and technical requirements of the project. A detailed analysis of the existing site, building and client program is critical to reveal both challenges and opportunities. This forms the starting point for the design process – empirical explorations that search for a balanced and inspired architectural solution to the project. The Red Bay National Historic Site on the Labrador Coast of eastern Canada is a challenging project in which to explore and resolve these ideas. Originally the site of a 16th-century whaling station occupied by Basque sailors from France and Spain, visitors now experience this history at the interpretive center, along with the unique geography, climate, local culture and vernacular architecture of the area. Working with our client, Parks Canada, the project called for significant alterations and expansion to the existing facility due to an increase in the number of annual visitors. Sustainable aspects of the design are focused on sensitive site development, passive energy strategies such as building orientation and building envelope efficiency, active renewable energy systems, carefully considered material selections, water efficiency, and interiors that respond to human comfort and a unique visitor experience.
Keywords: sustainability, renovations and expansion, cultural project, architectural design, green building
Procedia PDF Downloads 167
1133 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach
Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller
Abstract:
Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. However, only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on one side and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows for a quick and sufficiently exact determination of crucial process parameters.
Keywords: calibration model, celitement, cementitious material, NIR spectroscopy
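The abstract does not name the regression technique behind the calibration models; partial least squares (PLS) regression is the usual choice for NIR calibration, so the following hedged sketch uses PLS on synthetic spectra to show what such a calibration model looks like. The spectra, the target variable, and the number of latent variables are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic NIR spectra (500 wavelength channels) and a process parameter (e.g., drying temperature)
rng = np.random.default_rng(0)
n_samples, n_channels = 120, 500
target = rng.uniform(60, 180, size=n_samples)               # hypothetical drying temperature in degC
basis = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 40) ** 2)
spectra = target[:, None] * basis[None, :] * 1e-3 + rng.normal(0, 0.01, (n_samples, n_channels))

# PLS calibration model; the number of latent variables would normally be tuned by cross-validation
pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, target, cv=5, scoring="r2")
print("cross-validated R^2: %.3f" % scores.mean())
```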
Procedia PDF Downloads 499
1132 Comparative Policy Analysis on Agropolitan Territorial Development in Rural Area: A Study Case in Bojonegoro Regency, Indonesia
Authors: Fatihin Khoirul, Muhammad Muqorrobin Ist
Abstract:
Bojonegoro Regency is one of the districts that use the agropolitan concept as a territorial development policy. The three sub-districts designated as the agropolitan development area are Kapas, Dander, and Kalitidu, commonly called KADEKA. The current policy has shown results, but there is inequality in outcomes across some areas. One such case involves Ngringinrejo village, whose main commodity is starfruit, and Wedi village, whose main commodity is salak fruit. Therefore, a comparative study is used to search for the causal factors behind the unequal outcomes of the policy, comparing five aspects: (1) agropolitan development management; (2) the physical condition of the agropolitan region; (3) the implementing agency at the village level; (4) village government support; and (5) community support. Based on the qualitative analysis, it was found that the five aspects each play a role in creating the inequality of outcomes observed in the two villages. Beyond that, both villages experienced the same condition at the initial implementation of the policy, referred to here as 'the price trap phenomenon'. This condition is caused by low commodity prices, which lead to low commitment by the village government in implementing the policy, followed by low public awareness in support of the policy, so that care for the commodities is also low; the resulting low quality in turn leads back to low prices. The difference is that Ngringinrejo village was able to escape this condition through 'the new culture of administration' at the end of 2013, while the conditions in Wedi village were compounded by requests for assistance that were not honored by the irrigation district.
Keywords: comparative policy analysis, qualitative comparative, inequality, price trap, new culture of administration
Procedia PDF Downloads 286
1131 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs
Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet
Abstract:
Mapping the parallelized tasks of applications onto MPSoCs can be done either at design time (static) or at run-time (dynamic). Static mapping strategies find the best placement of tasks at design time and hence are not suitable for dynamic workloads, as they are incapable of run-time resource management. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new spiral dynamic task mapping heuristic for mapping applications onto NoC-based heterogeneous MPSoCs. This heuristic is based on a packing strategy and a routing algorithm, both also proposed in this paper. The heuristic tries to map the tasks of an application into a clustered region to reduce the communication overhead between communicating tasks. It attempts to map the tasks of an application that are most related to each other in a spiral manner and to find the best possible path load that minimizes the communication overhead. In this context, we have realized a simulation environment for experimental evaluations, mapping applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform. We demonstrate that the new mapping heuristic with the proposed modified Dijkstra routing algorithm is capable of reducing the total execution time and energy consumption of applications when compared to state-of-the-art run-time mapping heuristics reported in the literature.
Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm
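The paper’s modified Dijkstra algorithm is not reproduced here; the sketch below runs plain Dijkstra shortest-path search over an 8x8 mesh with per-link communication costs, which illustrates the routing step such a mapping heuristic relies on. The link-cost function is invented for the example.

```python
import heapq

def dijkstra_mesh(width, height, link_cost, src, dst):
    """Shortest path on a width x height mesh NoC; link_cost(a, b) gives the load/cost of link a->b."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                                   # stale heap entry
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # mesh neighbours
            if 0 <= nx < width and 0 <= ny < height:
                nd = d + link_cost(node, (nx, ny))
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)], prev[(nx, ny)] = nd, node
                    heapq.heappush(heap, (nd, (nx, ny)))
    path, node = [dst], dst
    while node != src:                                 # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Illustrative cost: every link costs 1, plus a penalty on a congested column of routers
cost = lambda a, b: 1.0 + (2.0 if b[0] == 3 else 0.0)
print(dijkstra_mesh(8, 8, cost, src=(0, 0), dst=(7, 7)))
```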
Procedia PDF Downloads 488
1130 A Phenomenological Study on the Role of Civil Society Organizations in Supporting Urban Refugees in Thailand
Authors: Rowena Clemino Alcoba
Abstract:
Thailand is host to the largest number of refugees in the region. The country has been one of the most accessible points of entry for refugees around the world because it has relatively lenient visa requirements, enabling asylum seekers to enter the country and subsequently search for legal assistance. However, because Thailand is not a signatory to the 1951 Geneva Convention on Refugees, which governs refugee status determination and safeguards several rights of refugees, there are no national laws or administrative framework for the protection of refugees. Refugees are considered illegal migrants, and certain groups are permitted to stay temporarily only at executive discretion. Aside from the documented group of refugees from the Myanmar border, there are many others who came from different parts of the world. They are known as urban refugees, believed to number in the thousands, and are scattered in the impoverished areas of Bangkok and its suburbs. This study aims to advance understanding of the role of civil society organizations in supporting refugees, with particular focus on urban refugees. Using the method of triangulation in qualitative research, the study investigates the life journey of a refugee family from Pakistan and their difficulties and struggles to survive in perilous situations. The study presents the dynamics of how civil society works and collaborates to fill the gap in much-needed social services. It also discusses the depth and scope of the role of faith actors in the protection and support of this vulnerable sector. The engagement of civil society reveals a framework and structure that aims to create long-term impact. The help provided is not merely monetary or material dole-outs but a platform for refugees to integrate with the community, develop skills, and make productive use of their time.
Keywords: asylum seeker, civil society, faith actors, refugees
Procedia PDF Downloads 146
1129 On the Qarat Kibrit Salt Dome Faulting System South of Adam, Oman: In Search of Uranium Anomalies
Authors: Alaeddin Ebrahimi, Narasimman Sundararajan, Bernhard Pracejus
Abstract:
The development of salt domes, often rising from depths of some 10 km or more, causes intense faulting of the surrounding host rocks (salt tectonics). The fractured rocks then present ideal space for oil that can migrate and become trapped. If such migrating hydrocarbons pass uranium-carrying rock units (e.g., shales), uranium is collected and enriched by organic carbon compounds. Brines from the salt body are also ideal carriers for oxidized uranium species and will further dislocate uranium when in contact with uranium-enriched oils. Uranium then has the potential to mineralize in the vicinity of the dome (blue halite is evidence for radiation having affected salt deposits elsewhere in the world). Based on this knowledge, the Qarat Kibrit salt dome was investigated with a well-established geophysical method, very low frequency electromagnetics (VLF-EM), along five traverses approximately 250 m in length (10 m intervals), in order to identify subsurface fault systems. In-phase and quadrature components of the VLF-EM signal were recorded at two different transmitter frequencies (24.0 and 24.9 kHz). The images of the Fraser-filtered response of the in-phase components indicate a conductive zone (fault) in the southeast and southwest of the study area. The Karous-Hjelt current density pseudo-section delineates subsurface faults at depths between 10 and 40 m. The stacked profiles of the Fraser-filtered responses brought out two plausible fault trends/directions. However, no evidence of uranium enrichment has been recorded in this area.
Keywords: salt dome, uranium, fault, in-phase component, quadrature component, Fraser filter, Karous-Hjelt current density
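A minimal sketch of the Fraser (1969) filter applied to VLF-EM in-phase data: a four-point difference operator that turns conductor crossovers into peaks, plotted between the middle stations of each window. The station readings below are invented, not the Qarat Kibrit data.

```python
def fraser_filter(in_phase):
    """Fraser filter: F_i = (x_{i+2} + x_{i+3}) - (x_i + x_{i+1}), plotted between stations i+1 and i+2."""
    return [(in_phase[i + 2] + in_phase[i + 3]) - (in_phase[i] + in_phase[i + 1])
            for i in range(len(in_phase) - 3)]

# Invented in-phase readings (%) along one traverse at 10 m station spacing
in_phase = [2.0, 3.5, 6.0, 4.0, -1.0, -5.0, -3.0, 0.5, 1.5, 2.0]
print([round(v, 1) for v in fraser_filter(in_phase)])
```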
Procedia PDF Downloads 238
1128 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi, there are many different education curricula, and the private schools and quality assurance sector supervises many private schools in Abu Dhabi serving many nationalities. As there are many different education curricula in Abu Dhabi to meet expats’ needs, there are different requirements for registration and success. In addition, there are different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curricula are not placed in the right year group, because the start and end dates of the academic year and the birth-date ranges for each year group differ between curricula. As a result, some students end up either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout their academic journey so that schools can track the student learning process. In this paper, we propose to develop a computational framework applicable in multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with artificial intelligence and machine learning techniques to aid in a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information for students who relocate from one education curriculum to another, whilst also having the ability to store and access student data from anywhere throughout their academic journey.
Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning
Procedia PDF Downloads 141
1127 Use of Geosynthetics as Reinforcement Elements in Unpaved Tertiary Roads
Authors: Vivian A. Galindo, Maria C. Galvis, Jaime R. Obando, Alvaro Guarin
Abstract:
In Colombia, most of the roads in the national tertiary road network are unpaved roads with a granular rolling surface. These are very important for guaranteeing the mobility of people, products, and inputs from the agricultural sector from the most remote areas to urban centers; however, little attention has been paid to the search for alternatives to prevent the deterioration that occurs shortly after these roads are put into service. In recent years, geosynthetics have been used satisfactorily to reinforce unpaved roads on soft soils, with geotextiles and geogrids being the most widely used. The interaction of the geogrid and the aggregate minimizes the lateral movement of the aggregate particles and increases the load capacity of the material, which leads to a better distribution of the vertical stresses and consequently reduces the vertical deformations in the subgrade. Taking the above into account, this research examined the mechanical behavior of the granular material used in unpaved roads, with and without geogrids, through laboratory tests using the loaded wheel tester (LWT). For comparison purposes, the reinforcement conditions and traffic conditions to which this type of material can be subjected in practice were simulated. In total, four types of geogrid were tested with the granular material, meaning that five test sets were evaluated: the reinforced materials and the non-reinforced control sample. The results for the number of load cycles and rutting depth supported by each test specimen showed the influence of the reinforcement properties on the mechanical behavior of the assembly, and significant increases in the number of load cycles for the reinforced specimens in relation to those without reinforcement.
Keywords: geosynthetics, load wheel tester LWT, tertiary roads, unpaved road, vertical deformation
Procedia PDF Downloads 248
1126 Comparative Study of Antimicrobial Activity of Bacteriocin Producing Lactic Acid Bacteria from Fermented Batter of Green Gram And Bengal Gram Against Food-Borne Pathogens
Authors: Bandi Aruna
Abstract:
The increase of multidrug-resistant pathogens and the restrictions on the use of antibiotics due to their side effects have drawn attention to the search for possible alternatives. Bacteriocins are ribosomally synthesized antimicrobial peptides that are active against Gram-positive and Gram-negative bacteria. Bacteriocins from lactic acid bacteria represent an important application of these peptides as clinical drugs or as food biopreservatives. The present study describes the isolation of bacteriocin-producing lactic acid bacteria (LAB) from fermented batter of green gram and bengal gram using de Man, Rogosa and Sharpe (MRS) medium. The bacteriocins produced by these organisms inhibited the growth of Staphylococcus aureus, Escherichia coli, Klebsiella species, and Pseudomonas aeruginosa. Isolates G1 and G2 were obtained from fermented green gram batter; B1 and B2 were obtained from fermented bengal gram batter. G1 and G2 were identified as Lactobacillus casei, and B1 and B2 were identified as Streptococcus species. The antimicrobial activity of the bacteriocins produced by these strains was studied by the agar well diffusion method. The bacteriocins produced by the Lactobacillus casei and Streptococcus species retained their antagonistic property at pH 5 and pH 7. The bacteriocins retained antibacterial activity after exposure to UV light for 4 min. The antagonistic property was observed even at 100°C, demonstrating the stability of the bacteriocins at higher temperatures. The bacteriocins were stable for a period of 15 days at 27°C. The bacteriocins of G1, G2, and B2 exhibited the highest antagonistic activity at pH 5, and that of B1 at pH 7. Therefore, the bacteriocins of the isolates may find important application in controlling food-borne pathogens.
Keywords: antibacterial activity, lactic acid bacteria, bacteriocin
Procedia PDF Downloads 400
1125 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling
Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo
Abstract:
Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity in reservoirs and consequently help to control overbank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better, in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach for solving complex problems. This paper makes an extensive review of literature relevant to the theories and applications of evolutionary algorithms, most especially genetic programming. Successful applications of genetic programming as a soft computing technique in sediment modelling and other branches of knowledge are reviewed. Some fundamental issues such as benchmarking, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the general GP community enough research direction and valuable guidance, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.
Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield
Procedia PDF Downloads 446
1124 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling
Authors: Florin Leon, Silvia Curteanu
Abstract:
Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight, and weight-average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression
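A minimal sketch of the adaptive-sampling idea (densify sampling where the response varies most) together with a comparison of standard regressors on a nonlinear one-dimensional response used as a stand-in for conversion versus time. The target function, sample counts, and model settings are illustrative assumptions, and the authors' original large margin nearest neighbor regression algorithm is not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

def response(t):
    """Stand-in for monomer conversion vs. time, with a steep 'gel effect'-like region."""
    return 1.0 / (1.0 + np.exp(-12 * (t - 0.6)))

# Adaptive sampling: start uniform, then add points where the local variation is largest
t = np.linspace(0, 1, 20)
for _ in range(3):
    grad = np.abs(np.diff(response(t)))
    worst = np.argsort(grad)[-10:]                       # intervals with the highest variation
    t = np.sort(np.concatenate([t, (t[worst] + t[worst + 1]) / 2]))

X = t.reshape(-1, 1)
y = response(t) + np.random.default_rng(0).normal(0, 0.01, t.size)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("SVR", SVR(C=10.0)),
                    ("kNN", KNeighborsRegressor(n_neighbors=5)),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    print(name, "R^2 = %.3f" % cross_val_score(model, X, y, cv=cv, scoring="r2").mean())
```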
Procedia PDF Downloads 302
1123 Factors Affecting the Caregiving Experience of Children with Parental Mental Illnesses: A Systematic Review
Authors: N. Anjana
Abstract:
Worldwide, the prevalence of mental illnesses is increasing. The issues of persons with mental illness and their caregivers have been well documented in the literature. However, data regarding the factors affecting the caregiving experience of children with parental mental illnesses are sparse. This systematic review aimed to examine the existing literature on the factors affecting the caregiving experience of children of parents with mental illnesses. A comprehensive search of databases such as PubMed, EBSCO, JSTOR, ProQuest Central, Taylor and Francis Online, and Google Scholar was performed to identify peer-reviewed papers examining the factors associated with caregiving experiences of children with parental mental illnesses such as schizophrenia and major depression, for the 10-year period ending November 2019. Two researchers screened studies for eligibility. One researcher extracted data from eligible studies while a second performed verification of results for accuracy and completeness. Quality appraisal was conducted by both reviewers. Data describing major factors associated with caregiving experiences of children with parental mental illnesses were synthesized and reported in narrative form. Five studies were considered eligible and included in this review. Findings are organized under major themes such as the impact of parental mental illness on children’s daily life, how children provide care to their mentally ill parents as primary carers, social and relationship factors associated with their caregiving, positive and negative experiences in caregiving, and how children cope with their experiences with parental mental illnesses. Literature relating to the caregiving experiences of children with parental mental illnesses is sparse. More research is required to better understand children’s caregiving experiences related to parental mental illnesses so as to better inform management for enhancing their mental health, wellbeing, and caregiving practice.
Keywords: caregiving experience, children, parental mental illnesses, wellbeing
Procedia PDF Downloads 140
1122 In Search of Sustainable Science Education at the Basic Level of Education in Ghana: The Unintended Consequences of Enacting Science Curriculum Reforms in Junior High Schools
Authors: Charles Deodat Otami
Abstract:
This paper documents an ongoing investigation that seeks to explore the consequences of repeated science curriculum reforms at the basic level of education in Ghana. It draws upon data collected through document analysis, semi-structured interviews, and classroom observations, linked with a study of teaching practices in junior high schools of educational districts that are well served with teachers and yet produce poor student achievement in science in the national Basic Education Certificate Examinations. The results emanating from the investigation highlight that the repeated science curriculum reforms at the basic level of education have led to the displacement of scientific knowledge in junior high schools in Ghana, a very critical level of education where the foundation for further science education to the highest level is laid. Furthermore, the results indicate that the enactment of centralised curriculum reforms in Ghana has produced some unpleasant repercussions. For instance, how teachers interpret and implement the curriculum is directly related to their own values and practices as well as students’ feedback. This is contrary to the perception that external impetus received from donor agencies holds the key to strengthening the reforms made. Thus, it is argued that without the right localised management, curriculum reforms themselves are inadequate to ensure the realisation of the desired effects. This paper therefore draws the attention of stakeholders to the fact that the enactment of school science curriculum reform goes beyond simple implementation to more complex dynamics which may change the original reform intents.
Keywords: basic education, basic education certificate examinations, curriculum reforms, junior high school, educational districts, teaching practices
Procedia PDF Downloads 264
1121 The Response of the Accumulated Biomass and the Efficiency of Water Use in Five Varieties of Durum Wheat Lines under Water Stress
Authors: Fellah Sihem
Abstract:
The optimal use of soil moisture by a crop is related to the leaf area index established during the cycle and its modulation according to the prevailing stress intensity. For a given stock of water in the soil, an adapted, water-saving cultivar is one that avoids luxury water consumption during pre-anthesis; it modulates its leaf area index to regulate transpiration according to its degree of water supply. In water-saving plants, avoidance of dehydration is related to the reduction of water loss through cuticular and stomatal pathways. Muchow and Sinclair reported that the relative water content (TRE) test is considered the best indicator of leaf water status. The search for indicators of the ability of the plant to make good use of water under water stress is a prerequisite for progress in improving performance under water stress. This experiment aims to characterize a set of durum wheat varieties, tested in pots under different levels of water stress, in terms of leaf area, relative water content, cell integrity, accumulated biomass and water use efficiency. The experiment was conducted during the 2005/2006 season at the Agricultural Research Station of the Field Crop Institute of Setif, under semi-controlled conditions. Five genotypes of durum wheat (Triticum durum Desf.) were evaluated for their ability to tolerate moderate and severe water stress. The results showed that genotypes respond differently to water stress. Dry matter accumulation and growth rate varied among genotypes and were significantly reduced. Under severe water stress, the biomass accumulated by Boussalam was the least affected.
Keywords: water stress, triticum durum, biomass, cell membrane integrity, relative water content
Procedia PDF Downloads 467
1120 Carbohydrate-Based Recommendations as a Basis for Dietary Guidelines
Authors: A. E. Buyken, D. J. Mela, P. Dussort, I. T. Johnson, I. A. Macdonald, A. Piekarz, J. D. Stowell, F. Brouns
Abstract:
Recently, a number of renewed dietary guidelines have been published by various health authorities. The aim of the present work was 1) to review the processes (systematic approach/review, inclusion of public consultation) and methodological approaches used to identify and select the underpinning evidence base for the established recommendations for total carbohydrate (CHO), fiber and sugar consumption, and 2) to examine how differences in the methods and processes applied may have influenced the final recommendations. A search of WHO, US, Canadian, Australian and European sources identified 13 authoritative dietary guidelines with the desired detailed information. Each of these guidelines was evaluated for its scientific basis (types and grading of the evidence) and the processes by which the guidelines were developed. Based on the data retrieved, the following conclusions can be drawn: 1) Generally, a relatively high total CHO and fiber intake and a limited intake of sugars (added or free) is recommended. 2) Even where recommendations are quite similar, the specific justifications for quantitative/qualitative recommendations differ across authorities. 3) Differences appear to be due to inconsistencies in underlying definitions of CHO exposure and in the concurrent appraisal of CHO-providing foods and nutrients, as well as the choice and number of health outcomes selected for the evidence appraisal. 4) Differences in the selected articles, time frames or data aggregation methods appeared to be of rather minor influence. From this assessment, the main recommendations are for: 1) more explicit quantitative justifications for numerical guidelines and communication of uncertainty; and 2) greater international harmonization, particularly with regard to underlying definitions of exposures and the range of relevant nutrition-related outcomes.
Keywords: carbohydrates, dietary fibres, dietary guidelines, recommendations, sugars
Procedia PDF Downloads 257
1119 Influence of Pine Wood Ash as Pozzolanic Material on Compressive Strength of a Concrete
Authors: M. I. Nicolas, J. C. Cruz, Ysmael Verde, A.Yeladaqui-Tello
Abstract:
The manufacture of Portland cement has revolutionized the construction industry since the nineteenth century; however, the high cost and large amount of energy required for its manufacture have, since the seventies, encouraged the search for alternative materials to replace it partially or completely. Among the materials studied to replace cement are ashes. In the city of Chetumal, in the south of the Yucatan Peninsula in Mexico, there are no natural sources of pozzolanic ash. In the present study, the cementitious properties of artificial ash resulting from the combustion of waste pine wood were analyzed. The ash obtained was sieved through the No. 200 screen, and a fraction was analyzed using X-ray diffraction with the aim of identifying the crystalline phases and the particle sizes of the pozzolanic material by the Debye-Scherrer equation. From the characterization of the materials, mixtures for a concrete of f'c = 250 kg/cm2 were designed with the ACI 211.1 method, for the control (pattern) mixture and for partial replacements of Portland cement with 5%, 10% and 12% pine wood ash. The axial compressive strength of specimens prepared with each concrete mixture was evaluated at 3, 14 and 28 days of curing. Pozzolanic activity was observed in the ash obtained, confirming the presence of crystalline silica (SiO2, 40.24 nm) and alumina (Al2O3, 35.08 nm). At 28 days of curing, the specimens prepared with 5% ash reached a compressive strength 63% higher than the design value; for specimens with 10% ash it was 45%, and for specimens with 12% ash only 36%. Compared to the control mixture, which after 28 days showed f'c = 423.13 kg/cm2, the specimens reached only 97%, 86% and 82% of its compressive strength for mixtures containing 5%, 10% and 12% ash, respectively. The pozzolanic activity of pine wood ash influences the compressive strength, which indicates that it can replace up to 12% of the Portland cement without compromising the design strength; however, there is a decrease in strength compared to the control concrete.
Keywords: concrete, pine wood ash, pozzolanic activity, X-ray
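The particle sizes quoted for the crystalline silica and alumina follow from the Debye-Scherrer relation, which in its usual form reads
\[
D = \frac{K\,\lambda}{\beta\,\cos\theta},
\]
where D is the crystallite size, K a shape factor (commonly taken as about 0.9), λ the X-ray wavelength, β the full width at half maximum of the diffraction peak (in radians), and θ the Bragg angle. The value of K and the peak chosen are not specified in the abstract, so this is stated here only as the standard form of the equation.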
Procedia PDF Downloads 454
1118 Does Creatine Supplementation Improve Swimming Performance?
Authors: Catrin Morgan, Atholl Johnston
Abstract:
Creatine supplementation should theoretically increase total muscle creatine and so enhance the generation of intramuscular phosphocreatine and subsequent ATP formation. The use of creatine as a potential ergogenic aid in sport has been an area of significant scientific research for a number of years. However, the effect of creatine supplementation on swimming performance is a relatively new area of research and is the subject of this review. In swimming, creatine supplementation could help maintain maximal power output, aid recovery and increase lean body mass. After investigating the underlying theory and science behind creatine supplementation, a literature review was conducted to identify the best evidence on the effect of creatine supplementation on swimming performance. The search identified 27 potential studies, and of these 17 were selected for review. The studies were then categorised into single sprint performance, which involves swimming a short distance race, or repeated interval performance, which involves swimming a series of sprints with intervals of rest between them. None of the studies on the effect of creatine controlled for the multiple confounding factors associated with the measurement of swimming performance. The sample sizes in the studies were limited, which reduced the reliability of the studies and introduced the possibility of bias. The studies reviewed provided insufficient evidence to determine whether creatine supplementation is beneficial to swimming performance. However, what data there were supported the use of creatine supplementation in repeated interval swimming rather than in single sprint swimming. From a review of the studies, it was calculated that, on average, there was a 1.37% increase in swimming performance with the use of creatine for repeated intervals and a 0.86% increase in performance for single sprints. While this may seem minor, it should be remembered that swimming races are often won by much smaller margins. In the 2012 London Olympics, the Men’s 100 metres freestyle race was won by a margin of only 0.01 of a second. Therefore, any potential benefit could make a dramatic difference to the final outcome of the race. Overall, more research is warranted before the benefits of creatine supplementation in swimming performance can be further clarified.
Keywords: creatine supplementation, repeated interval, single sprint, swimming performance
Procedia PDF Downloads 425