Search results for: Minimum data set
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26245


25705 Statistical Quality Control on Assignable Causes of Variation on Cement Production in Ashaka Cement PLC Gombe State

Authors: Hamisu Idi

Abstract:

The present study focuses on the impact of assignable causes of variation on the quality of cement production. Exploratory research was done on a monthly basis, with data obtained from a secondary source, i.e., the records kept by an automated recompilation machine. The machine keeps all records of the mills' downtime, which the process manager checks for validation, referring any fault to the department responsible for maintenance or corrective measurement so as to prevent future occurrence. The findings indicated that the products of Ashaka Cement Plc. could be considered of good quality, since all the production processes were found to be in control (within preset specifications), with the exception of natural causes of variation, which are normal in a production process and do not affect the outcome of the product. Such variation is reduced to the barest minimum, since it cannot be totally eliminated. It is hoped that the findings of this study will be of great assistance to the management of the Ashaka cement factory, and to the process manager in particular, at various levels in the monitoring and implementation of statistical process control. The study therefore contributes to knowledge in this regard, and it is hoped that it will open up more research in this direction.
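The kind of control-chart logic used in statistical process control can be sketched as below. This is a minimal illustration with made-up subgroup means and function names; it is not the plant's data or the study's actual procedure.

```python
# Illustrative Shewhart X-bar control chart check: estimate 3-sigma limits
# from the subgroup means and flag any subgroup falling outside them.
import statistics

def xbar_limits(subgroup_means):
    """Return (LCL, center line, UCL) estimated from the subgroup means."""
    center = statistics.mean(subgroup_means)
    sigma = statistics.stdev(subgroup_means)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(subgroup_means):
    lcl, _, ucl = xbar_limits(subgroup_means)
    return [i for i, m in enumerate(subgroup_means) if m < lcl or m > ucl]

means = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 4.9]  # hypothetical monthly means
print(out_of_control(means))  # [] -> all points within limits, process in control
```

A process is declared "in control" when no subgroup mean breaches the limits, matching the abstract's finding that only common-cause variation remained.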

Keywords: cement, quality, variation, assignable cause, common cause

Procedia PDF Downloads 258
25704 The Practice of Integrating Sustainable Elements into the Housing Industry in Malaysia

Authors: Wong Kean Hin, Kumarason A. L. V. Rasiah

Abstract:

A building provides shelter and protection for an individual to live, work, sleep, procreate or engage in leisure activities comfortably. A very popular term is now attached to buildings by many parties: sustainability. A sustainable building is environmentally friendly, healthy for its occupants, and efficient in its use of electricity and water. This research is relevant to any party involved in the construction industry. It assesses the awareness and acceptance of sustainable residential buildings among the Malaysian public, and it informs developers about which sustainable features people actually want, so that developers can build sustainable housing that suits people's needs. It then proposes solutions to the difficulties of implementing sustainability in the Malaysian housing industry. Qualitative and quantitative research methods were used throughout the data collection process: questionnaires were distributed to 100 members of the Malaysian public and 50 individuals working in developer companies, followed by interview sessions with experienced personnel in the Malaysian construction industry. The data collected show that awareness of sustainability is increasing among both the Malaysian public and developers. Moreover, the public is willing to invest in sustainable residential buildings at a minimal additional cost. However, there is a mismatch between the sustainable elements provided by developers and the public's needs. Some recommendations to improve the progress of sustainability are proposed in this study, including law enforcement, cooperation between the government and private sectors as well as within the private sector, and learning from developed countries. This information will be helpful for the future of sustainable development in Malaysia.

Keywords: acceptability, awareness, Malaysian housing industry, sustainable elements, green building index

Procedia PDF Downloads 365
25703 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis, with applications in artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm that aims to cluster a set of data points into a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function on the map output, to decrease the amount of data that must be processed by the reducers. The experimental results demonstrated that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also showed that our K-means algorithm using RHadoop with a combiner was faster than the regular algorithm without a combiner as the size of the data set increases.
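One MapReduce-style k-means iteration with the combiner idea can be sketched as follows. The study implements this in RHadoop; plain Python stands in for the map, combine and reduce phases here, and the sample points are illustrative.

```python
# Sketch of one k-means iteration where a combiner pre-aggregates partial
# (sum, count) pairs per centroid, so the reducer receives one pair per key
# instead of every individual point -- the bandwidth saving described above.

def nearest(point, centroids):
    # map phase: each point is keyed by the index of its nearest centroid
    dists = [sum((p - c) ** 2 for p, c in zip(point, cen)) for cen in centroids]
    return dists.index(min(dists))

def kmeans_step(points, centroids):
    dim = len(centroids[0])
    sums = {i: ([0.0] * dim, 0) for i in range(len(centroids))}
    for pt in points:                       # combine: accumulate partial sums
        i = nearest(pt, centroids)
        s, n = sums[i]
        sums[i] = ([a + b for a, b in zip(s, pt)], n + 1)
    # reduce phase: new centroid = aggregated sum / count
    return [tuple(v / n for v in s) if n else centroids[i]
            for i, (s, n) in sorted(sums.items())]

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(kmeans_step(pts, [(0.0, 0.0), (10.0, 10.0)]))  # [(0.0, 0.5), (10.0, 10.5)]
```

Iterating `kmeans_step` until the centroids stop moving completes the algorithm; in the distributed setting only the compact per-centroid aggregates cross the network.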

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 431
25702 Framework for Integrating Big Data and Thick Data: Understanding Customers Better

Authors: Nikita Valluri, Vatcharaporn Esichaikul

Abstract:

With the popularity of data-driven decision making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented that delivers a more robust and efficient approach to data-driven decision-making with respect to not only Big data but also 'Thick data', a form of qualitative data. In support of this, an example from the retail sector illustrates the framework in action, yielding insights and leveraging business intelligence. An interpretive approach is used to analyze the findings from both the quantitative and the qualitative data. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory and ethnomethodology). The final goal of this study is to establish the framework as the basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is an enhancement of the traditional data-driven decision-making approach, which depends mainly on quantitative data.

Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data

Procedia PDF Downloads 157
25701 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges

Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt

Abstract:

Vegetative Filter Strips (VFS) are used for controlling the volume of runoff and decreasing contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard; several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and to assess the efficiency of the model in simulating the filter behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska-Lincoln (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges and rainfall events. The Sediment Delivery Ratio (SDR) was defined as the filter efficiency, and the corresponding experimental and model values were compared to each other. The VFS model generally agreed with the experimental results, and as a result, the model was used to predict filter efficiencies when runoff data are not available. Narrow Grass Hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs for different soil types and filter lengths. There is an optimum length for the filter strip that produces minimum runoff volume: based on the model results, increasing the length of the filter by one meter leads to higher efficiency, but lengthening it beyond that decreases the efficiency. VFSMOD, which has previously been shown to estimate VFS trapping efficiency well, produced consistent results for ARG removal.

Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system

Procedia PDF Downloads 130
25700 Investigation into the Suitability of Aggregates for Use in Superpave Design Method

Authors: Ahmad Idris, Armaya`u Suleiman Labo, Ado Yusuf Abdulfatah, Murtala Umar

Abstract:

Superpave is short for Superior Performing Asphalt Pavements and represents a basis for specifying component materials, asphalt mixture design and analysis, and pavement performance prediction. This technology is the result of long research projects conducted under the Strategic Highway Research Program (SHRP) of the Federal Highway Administration. This research examined the suitability of aggregates found in Kano for use in the Superpave design method. Aggregate samples were collected from different sources in Kano, Nigeria, and their engineering properties, as they relate to the Superpave design requirements, were determined. The average coarse aggregate angularity in Kano was found to be 87% for one fractured face and 86% for two or more fractured faces, against standards of 80% and 85%, respectively. The average fine aggregate angularity was found to be 47%, with a requirement of 45% minimum. Flat and elongated particles were found to be 10%, against a maximum criterion of 10%. The sand equivalent was found to be 51%, with a criterion of 45% minimum. Strength tests were also carried out, and the results meet the requirements of the standards: the aggregate impact value, aggregate crushing value and aggregate abrasion results are 27.5%, 26.7% and 13%, respectively, against a maximum criterion of 30%. Specific gravity was found to have an average value of 2.52, against a criterion of 2.6 to 2.9, and water absorption was found to be 1.41%, against a maximum criterion of 0.6%. From the study, the test results indicated that the aggregate properties met the requirements of the Superpave design method based on the specifications of ASTM D 5821, ASTM D 4791, AASHTO T176, AASHTO T33 and BS 815.
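The pass/fail comparisons reported above can be expressed as a simple spec-compliance check. The values below are the angularity and strength results quoted in the abstract; the dictionary structure and names are illustrative, not part of any standard.

```python
# Checking measured aggregate properties against Superpave-style criteria.
# Each entry: name -> (criterion kind, limit in %, measured value in %).
criteria = {
    "coarse_angularity_1face": ("min", 80, 87),
    "coarse_angularity_2face": ("min", 85, 86),
    "fine_angularity":         ("min", 45, 47),
    "flat_elongated":          ("max", 10, 10),
    "sand_equivalent":         ("min", 45, 51),
    "impact_value":            ("max", 30, 27.5),
    "crushing_value":          ("max", 30, 26.7),
    "abrasion":                ("max", 30, 13),
}

def passes(kind, limit, value):
    return value >= limit if kind == "min" else value <= limit

results = {name: passes(k, lim, val) for name, (k, lim, val) in criteria.items()}
print(all(results.values()))  # True: each listed test meets its criterion
```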

Keywords: aggregates, construction, road design, super pave

Procedia PDF Downloads 236
25699 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of document data. The amount of document data has been increasing since the spread of the Internet, and ITA was presented as one method for analyzing it. ITA extracts independent topics from document data using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing document collection, because ITA must process all of the document data at once, so its temporal and spatial costs are very high. We therefore present Incremental ITA, which extracts independent topics from a growing body of document data by updating the previously extracted independent topics whenever new documents are added. Finally, we show the results of applying Incremental ITA to benchmark datasets.
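ICA-based methods rely on global statistics of the data, which is why plain ITA must revisit the whole collection. The core incremental idea (maintain running sufficient statistics and update them as new documents arrive rather than recomputing from scratch) can be sketched with an online mean/variance update, Welford's algorithm. This is only a building block shown for illustration, not the authors' algorithm.

```python
# Online (incremental) mean and variance: the running statistics are updated
# per new observation instead of revisiting all previously seen data.

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):  # unbiased sample variance
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

rs = RunningStats()
for x in [2.0, 4.0, 6.0]:   # e.g. a feature value per incoming document
    rs.update(x)
print(rs.mean, rs.variance)  # 4.0 4.0
```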

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 305
25698 Open Data for e-Governance: Case Study of Bangladesh

Authors: Sami Kabir, Sadek Hossain Khoka

Abstract:

Open Government Data (OGD) refers to all data produced by a government that are accessible in a reusable way, free of cost, by anyone with Internet access. In line with the "Digital Bangladesh" vision of the Bangladesh government, the concept of open data has been gaining momentum in the country. Opening all government data in a digital and customizable format on a single platform can enhance e-governance and make government more transparent to the people. This paper presents an in-progress case study of an OGD portal by the Bangladesh Government intended to link decentralized data. The initiative aims to facilitate e-services for citizens through this one-stop web portal. The paper further discusses ways of collecting data in digital format from the relevant agencies with a view to making them publicly available through this single point of access. Finally, a possible layout of the web portal is presented.

Keywords: e-governance, one-stop web portal, open government data, reusable data, web of data

Procedia PDF Downloads 349
25697 About Some Results of the Determination of Alcohol in Moroccan Gasoline-Alcohol Mixtures

Authors: Mahacine Amrani

Abstract:

A simple and rapid method for the determination of alcohol in gasoline-alcohol mixtures using density measurements is described. The method can detect a minimum of 1% alcohol by volume, with a precision of ± 3%. The method is most useful for field testing in the quality assessment of alcohol-blended fuels.
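A density-based determination of this kind can be sketched with a linear-by-volume mixing model. This is an idealization assumed here for illustration (real gasoline-ethanol blends deviate slightly from linear mixing), and the component densities are typical textbook values, not the paper's calibration.

```python
# Estimate the alcohol volume fraction of a blend from its measured density,
# assuming ideal (linear-by-volume) mixing of the two components.

RHO_GASOLINE = 0.740  # g/mL, illustrative
RHO_ETHANOL  = 0.789  # g/mL, illustrative

def alcohol_fraction(rho_mix):
    """Invert rho_mix = (1 - f) * rho_gasoline + f * rho_ethanol for f."""
    return (rho_mix - RHO_GASOLINE) / (RHO_ETHANOL - RHO_GASOLINE)

# A blend measured at 0.7449 g/mL corresponds to about 10% alcohol by volume:
print(round(alcohol_fraction(0.7449) * 100, 1))  # 10.0
```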

Keywords: gasoline-alcohol, mixture, alcohol determination, density, measurement, Morocco

Procedia PDF Downloads 318
25696 Undoped and Fluorine Doped Zinc Oxide (ZnO:F) Thin Films Deposited by Ultrasonic Chemical Spray: Effect of the Solution on the Electrical and Optical Properties

Authors: E. Chávez-Vargas, M. de la L. Olvera-Amador, A. Jimenez-Gonzalez, A. Maldonado

Abstract:

Undoped and fluorine-doped zinc oxide (ZnO) thin films were deposited on sodocalcic glass substrates by the ultrasonic chemical spray technique. As the main goal is the manufacture of transparent electrodes, the effects of both the solution composition and the substrate temperature on the electrical and optical properties of the ZnO thin films were studied. Specifically, the fluorine concentration ([F]/[F+Zn] at. %), the solvent composition (acetic acid, water and methanol ratios) and the ageing time of the solution were varied, along with the substrate temperature and the deposition time of the spray process. Structural studies confirm the deposition of polycrystalline, hexagonal, wurtzite-type ZnO. The results show that increasing the [F]/[F+Zn] ratio in the solution decreases the sheet resistance, RS, of the ZnO:F films, reaching a minimum on the order of 1.6 Ω/sq at 60 at. %; a further increase in the [F]/[F+Zn] ratio increases the RS of the films. The same trend occurs with the variation in substrate temperature, as a minimum RS of the ZnO:F thin films was found at TS = 450 °C. ZnO:F thin films deposited with an aged solution show a significant decrease in RS, on the order of 100 Ω/sq. The transmittance of the films was also favorably affected by the solvent ratio and, more significantly, by the ageing of the solution. The overall evaluation of the optical and electrical characteristics of the ZnO:F thin films deposited under different conditions was done using Haacke's figure of merit, in order to obtain a clear and quantitative ranking for transparent-conductor applications.
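Haacke's figure of merit combines transmittance T and sheet resistance R_s into a single score, phi = T^10 / R_s, so films can be ranked on their transparency-conductivity trade-off. A minimal sketch follows; the film values are illustrative, not the paper's measurements.

```python
# Rank transparent-conductor films by Haacke's figure of merit phi = T**10 / Rs.
# Higher phi means a better balance of high transmittance and low resistance.

def haacke_fom(transmittance, sheet_resistance_ohm_sq):
    return transmittance ** 10 / sheet_resistance_ohm_sq

films = {  # (T at ~550 nm, R_sheet in ohm/sq) -- hypothetical values
    "undoped": (0.85, 120.0),
    "F-doped": (0.88, 40.0),
}
best = max(films, key=lambda name: haacke_fom(*films[name]))
print(best)  # F-doped: lower Rs outweighs the small transmittance difference
```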

Keywords: zinc oxide, ZnO:F, TCO, Haacke’s figure of Merit

Procedia PDF Downloads 309
25695 Selection of Appropriate Classification Technique for Lithological Mapping of Gali Jagir Area, Pakistan

Authors: Khunsa Fatima, Umar K. Khattak, Allah Bakhsh Kausar

Abstract:

Satellite image interpretation and analysis assist geologists by providing valuable information about the geology and minerals of an area to be surveyed. A test site in Fatejang, district Attock, has been studied using Landsat ETM+ and ASTER satellite images for lithological mapping. Five supervised image classification techniques, namely maximum likelihood, parallelepiped, minimum distance to mean, Mahalanobis distance and spectral angle mapper, were applied to both satellite images to find the most suitable classification technique for lithological mapping in the study area. The results of these five techniques were compared with the geological map produced by the Geological Survey of Pakistan. The maximum likelihood classification applied to the ASTER satellite image has the highest correlation, 0.66, with the geological map. Field observations and XRD spectra of field samples also verified the results. A lithological map was then prepared based on the maximum likelihood classification of the ASTER satellite image.
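Of the five techniques compared, minimum distance to mean is the simplest to sketch: each pixel is assigned to the class whose mean spectral vector is nearest. The two-band pixel values and class names below are toy data for illustration; the study works on full multispectral Landsat/ASTER bands.

```python
# Minimum-distance-to-mean supervised classification on toy two-band pixels.

def class_means(samples):
    """samples: {class_name: [pixel vectors]} -> {class_name: mean vector}."""
    return {c: tuple(sum(band) / len(v) for band in zip(*v))
            for c, v in samples.items()}

def classify(pixel, means):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda c: dist(pixel, means[c]))

training = {"limestone": [(0.30, 0.55), (0.34, 0.57)],
            "shale":     [(0.10, 0.20), (0.12, 0.24)]}
means = class_means(training)
print(classify((0.31, 0.50), means))  # limestone
```

Maximum likelihood, which performed best in the study, additionally models each class's covariance rather than using plain Euclidean distance to the mean.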

Keywords: ASTER, Landsat-ETM+, satellite, image classification

Procedia PDF Downloads 391
25694 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients

Authors: Boguslaw Schreyer

Abstract:

During the wheeled robot's braking process, extra dynamic vertical forces act on the wheels: directed downward on the front wheels and upward on the rear wheels. In order to maximize the deceleration, and therefore minimize the braking time and braking distance, we need to calculate a correct torque distribution: the front braking torque should be increased, and the rear torque decreased. At the same time, we need to provide good transversal stability. In the simple case of the adhesion coefficient being the same under all wheels, this torque distribution can secure optimal (maximal) control of the robot's braking process, with minimum braking distance and braking time, while the transversal stability remains relatively good. The transversal acceleration is monitored at all times; in the case of transversal movement, we stop the braking process and re-apply the braking torque after a defined period of time. If the torque values are calculated correctly, the traction coefficient under the front and rear wheels can be kept close to its maximum. To provide optimal braking control, we also need to calculate the timing of the braking torque application and release. The braking torques should be released shortly after the wheels pass the maximum of the traction coefficient (while the wheels' slip increases) and applied again after the wheels pass the maximum again (while the slip decreases). A correct braking torque distribution lets the front and rear wheels pass this maximum at the same time, guaranteeing optimal deceleration control and therefore minimum braking time. To calculate a correct torque distribution, a control unit should receive input signals for the rear torque value (which changes independently), the robot's deceleration, and the vertical front and rear forces.
To calculate the timing of torque application and release, more signals are needed: the speed of the robot, and the angular speed and angular deceleration of the wheels. In the case of adhesion coefficients that differ between the left and right sides but are the same within each side, the Select-Low (SL) and Select-High (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. For a robot, braking efficiency is often more important, so the SH method is applied with some control of the transversal stability. In the case that the adhesion coefficients differ under all four wheels, the front-rear torque distribution is maintained as in the previous cases, but the timing of the braking torque application and release is governed by the lowest adhesion coefficient of the rear wheels. The Lagrange equations have been used to describe the robot dynamics, Matlab has been used to simulate the wheeled robot's braking process, and on this basis the braking methods have been selected.
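The Select-Low versus Select-High choice on one axle can be sketched as below. The numbers and the simplified torque-limit logic are illustrative assumptions; the paper's full controller also uses wheel speeds, slip and timing, which this sketch omits.

```python
# Select-Low (SL) vs Select-High (SH) braking torque selection on one axle
# under asymmetric adhesion: SL holds both wheels to the lower adhesion limit
# (better stability, less deceleration); SH exploits each wheel's own limit.

def braking_torques(mu_left, mu_right, wheel_load_n, wheel_radius_m, mode="SH"):
    # torque each wheel could transmit at its own adhesion limit
    t_left = mu_left * wheel_load_n * wheel_radius_m
    t_right = mu_right * wheel_load_n * wheel_radius_m
    if mode == "SL":
        low = min(t_left, t_right)
        return low, low
    return t_left, t_right

print(braking_torques(0.8, 0.4, 3000.0, 0.5, mode="SL"))  # (600.0, 600.0)
print(braking_torques(0.8, 0.4, 3000.0, 0.5, mode="SH"))  # (1200.0, 600.0)
```

The asymmetric SH torques yield more total braking force but a yaw moment, which is why the paper pairs SH with supervision of the transversal stability.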

Keywords: wheeled robots, braking, traction coefficient, asymmetric

Procedia PDF Downloads 160
25693 Resource Framework Descriptors for Interestingness in Data

Authors: C. B. Abhilash, Kavi Mahesh

Abstract:

Human beings are the most advanced species on Earth, largely because of the ability to communicate and share information via human language. Today, a huge amount of data is available on the web in text format, which has also resulted in the generation of big data in structured and unstructured forms. In general, this textual data is highly unstructured; to get insights and actionable content from it, we need to incorporate the concepts of text mining and natural language processing. In our study, we focus mainly on interesting data, from which interesting facts are generated for the knowledge base. The approach is to derive the analytics from the text via natural language processing. Using the semantic web's Resource Description Framework (RDF), we generate triples from the given data and derive interesting patterns from them. The methodology also illustrates data integration using RDF for reliable, interesting patterns.
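Triple generation of the kind described above can be sketched with plain tuples standing in for a full RDF library such as rdflib. The namespace URI, record fields and query are illustrative, not the authors' knowledge base.

```python
# Turn a tabular record into subject-predicate-object triples, then mine the
# triple set with a simple pattern query -- a stand-in for RDF-based analytics.

EX = "http://example.org/"  # hypothetical namespace

def to_triples(record):
    subject = EX + record["id"]
    return [(subject, EX + key, value)
            for key, value in record.items() if key != "id"]

record = {"id": "city/Bangalore", "population": "8443675", "country": "India"}
triples = to_triples(record)
print(len(triples))  # 2

# an "interesting pattern": every subject asserted to be in India
in_india = [s for s, p, o in triples if p.endswith("country") and o == "India"]
print(in_india)  # ['http://example.org/city/Bangalore']
```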

Keywords: RDF, interestingness, knowledge base, semantic data

Procedia PDF Downloads 159
25692 Effect of Inventory Management on Financial Performance: Evidence from Nigerian Conglomerate Companies

Authors: Adamu Danlami Ahmed

Abstract:

Inventory management is a determinant of effective and efficient work for any manager. This study looked at the relationship between inventory management and financial performance. The population of the study comprises all quoted conglomerate companies on the Nigerian Stock Exchange as at 31st December 2010, and the scope covers the period from 2010 to 2014. Descriptive statistics, Pearson correlation and multiple regression were used to analyze the data. It was found that inventory management is significantly related to the profitability of the company: efficient management of the inventory cycle will enhance profitability, while a lack of proper management will hinder the financial performance of the organization. Based on the results, it was recommended that conglomerate companies keep inventories to a minimum and maintain proper checks so that only needed inventories are in the store, as well as track the movement of goods in order to avoid unnecessary delays of finished and work-in-progress (WIP) goods in the store and warehouse.
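The Pearson correlation at the core of the analysis can be sketched as follows. The inventory-turnover and return-on-assets numbers are toy values standing in for the study's panel data, not its actual figures.

```python
# Pearson product-moment correlation between an inventory efficiency metric
# and a profitability metric, computed from first principles.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

inventory_turnover = [4.0, 5.5, 6.0, 7.2, 8.1]   # hypothetical, per firm-year
roa = [0.05, 0.07, 0.08, 0.10, 0.12]             # hypothetical return on assets
print(round(pearson(inventory_turnover, roa), 3))  # strong positive correlation
```

A correlation near +1 here would mirror the study's finding that efficient inventory cycles go with higher profitability; the regression step then controls for other firm characteristics.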

Keywords: finished goods, work in progress, financial performance, inventory

Procedia PDF Downloads 230
25691 Data Mining Practices: Practical Studies on the Telecommunication Companies in Jordan

Authors: Dina Ahmad Alkhodary

Abstract:

This study aimed to investigate the data mining practices of the telecommunication companies in Jordan, from the viewpoint of the respondents. In order to achieve the goal of the study and test the validity of the hypotheses, the researcher designed a questionnaire to collect data from managers and staff members of the main departments in the researched companies. The results show the progress of the telecommunications companies through successive stages of data mining adoption.

Keywords: data, mining, development, business

Procedia PDF Downloads 490
25690 Assessment of Genetic Diversity and Population Structure of Goldstripe Sardinella, Sardinella gibbosa in the Transboundary Area of Kenya and Tanzania Using mtDNA and msDNA Markers

Authors: Sammy Kibor, Filip Huyghe, Marc Kochzius, James Kairo

Abstract:

Goldstripe Sardinella, Sardinella gibbosa (Bleeker, 1849), is a commercially and ecologically important small pelagic fish common in the Western Indian Ocean region. The present study aimed to assess the genetic diversity and population structure of the species in the Kenya-Tanzania transboundary area using mtDNA and msDNA markers. A 630 bp sequence of the mitochondrial DNA (mtDNA) Cytochrome C Oxidase I (COI) gene and five polymorphic microsatellite DNA loci were analyzed. Fin clips of 309 individuals from eight locations within the transboundary area were collected between July and December 2018. The S. gibbosa individuals from the different locations were distinguishable from one another based on mtDNA variation, as demonstrated with a neighbor-joining tree and a minimum spanning network analysis. None of the 22 identified haplotypes were shared between Kenya and Tanzania. Gene diversity per locus was relatively high (0.271-0.751), and the highest Fis was 0.391. The structure analysis, discriminant analysis of principal components (DAPC) and the pairwise FST values (FST = 0.136, P < 0.001) after Bonferroni correction using the five microsatellite loci provided clear evidence of genetic differentiation and thus of population structure of S. gibbosa along the Kenya-Tanzania coast. This study shows a high level of genetic diversity and the presence of population structure (Φst = 0.078, P < 0.001), with four populations and a clear indication of minimal gene flow among them. This information has application in the design of marine protected areas, an important tool for marine conservation.
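Gene (haplotype) diversity of the kind reported above is commonly computed with Nei's unbiased estimator, H = n/(n-1) * (1 - Σ p_i²), where p_i are haplotype frequencies in a sample of n individuals. A sketch with illustrative haplotype counts (not the study's data):

```python
# Nei's unbiased gene / haplotype diversity from haplotype counts.

def gene_diversity(counts):
    n = sum(counts)                       # sample size
    freq_sq = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1 - freq_sq)   # unbiased correction n/(n-1)

# e.g. 10 individuals carrying three haplotypes with counts 5, 3 and 2:
print(round(gene_diversity([5, 3, 2]), 3))  # 0.689
```

Values near 1 mean most pairs of individuals carry different haplotypes, matching the "relatively high" per-locus diversities (0.271-0.751) the study reports.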

Keywords: marine connectivity, microsatellites, population genetics, transboundary

Procedia PDF Downloads 119
25689 The Impact of System and Data Quality on Organizational Success in the Kingdom of Bahrain

Authors: Amal M. Alrayes

Abstract:

Data and system quality play a central role in organizational success, and the quality of any existing information system has a major influence on overall system performance. Given their importance, it is relevant to examine the impact of system and data quality on organizational performance in the Kingdom of Bahrain. This research aims to discover whether system quality and data quality are related, and to study their impact on organizational success. A theoretical model based on previous research is used to show the relationship between data quality, system quality and organizational impact. We hypothesize, first, that system quality is positively associated with organizational impact; second, that system quality is positively associated with data quality; and finally, that data quality is positively associated with organizational impact. A questionnaire was administered to public and private organizations in the Kingdom of Bahrain. The results show a strong association between data and system quality, which affects organizational success.

Keywords: data quality, performance, system quality, Kingdom of Bahrain

Procedia PDF Downloads 489
25688 Cutting Plane Methods for Integer Programming: NAZ Cut and Its Variations

Authors: A. Bari

Abstract:

Integer programming is a branch of mathematical programming in operations research in which some or all of the variables are required to be integer-valued. Various cuts have been used to solve such problems. We have developed cuts known as the NAZ cut and the A-T cut to solve integer programming problems. These cuts are used to reduce the feasible region and then reach the optimal solution in a minimum number of steps.
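The NAZ and A-T cuts are the authors' own constructions, not reproduced here. As a generic illustration of how any cutting plane tightens the feasible region, the classical Gomory fractional cut can be derived from a simplex row with a fractional right-hand side:

```python
# Gomory fractional cut: from a simplex row  x_B + sum(a_j * x_j) = b  with
# fractional b, the inequality  sum(frac(a_j) * x_j) >= frac(b)  cuts off the
# current fractional LP optimum without excluding any integer-feasible point.
import math

def frac(v):
    return v - math.floor(v)  # fractional part, always in [0, 1)

def gomory_cut(row_coeffs, rhs):
    return [frac(a) for a in row_coeffs], frac(rhs)

coeffs, cut_rhs = gomory_cut([1.25, -0.5, 2.0], 3.75)
print(coeffs, cut_rhs)  # [0.25, 0.5, 0.0] 0.75
```

Cutting plane methods add such inequalities repeatedly, re-solving the LP each time, until the LP optimum becomes integer; the NAZ and A-T cuts aim to reach that point in fewer steps.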

Keywords: integer programming, NAZ cut, A-T cut, cutting plane method

Procedia PDF Downloads 359
25687 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers as well as for the data providers who put their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Mahout project, the Hadoop distributed file system, k-means and k-medoids, Apriori) that require efficient, high-performance processors to produce timely results and to fit or optimize model parameters. The challenges such operations encounter include establishing successful transactions with the existing virtual machine environment and keeping the databases under control. Several factors have driven the move from normal or centralized mining to distributed data mining; one approach is a SaaS that uses multi-agent systems to implement the different tasks of the system. There remain open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 476
25686 Cross-border Data Transfers to and from South Africa

Authors: Amy Gooden, Meshandren Naidoo

Abstract:

Genetic research and transfers of big data are not confined to a particular jurisdiction, but there is a lack of clarity regarding the legal requirements for importing and exporting such data. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear.

Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa

Procedia PDF Downloads 122
25685 The Study of Security Techniques on Information System for Decision Making

Authors: Tejinder Singh

Abstract:

An information system (IS) is the flow of data across different levels and in different directions for decision making and data operations. Data can be compromised in different ways, such as by manual or technical errors, data tampering, or loss of integrity. The security system of an IS, such as a firewall, is affected by these kinds of violations. The flow of data among the various levels of an information system is handled by the networking system, where data travels in the form of packets or frames. To protect these packets from unauthorized access and virus attacks, and to maintain their integrity, network security is an important factor, and various security techniques are used to protect the data from piracy. This paper presents these security techniques and describes different harmful attacks with the help of detailed data analysis. The paper will help organizations make their systems more secure and effective for future decision making.

Keywords: information systems, data integrity, TCP/IP network, vulnerability, decision, data

Procedia PDF Downloads 302
25684 Estimating Annual Average Daily Traffic Using Statewide Traffic Data Programs: Missing Data Analysis

Authors: Muhammad Faizan Rehman Qureshi, Ahmed Al-Kaisy

Abstract:

State highway agencies usually operate system-wide traffic monitoring programs for collecting traffic data. Of particular importance is the traffic volume data used in estimating the Annual Average Daily Traffic (AADT). State Departments of Transportation (DOTs) measure the AADT at the locations of permanent Automatic Traffic Recorder (ATR) and Weigh-in-Motion (WIM) stations and estimate the parameter at all other locations using short-term counts. Traffic counters at the permanent ATR and WIM stations frequently malfunction, resulting in specific periods of inaccurate or missing data. This study used ATR and WIM data from the state of Montana to examine the effect of missing data on the accuracy of AADT estimation. Two random sampling techniques were used, and three scenarios of data availability were considered: one, two, and three weeks of data within each month. The results showed that the error in the AADT estimate did not grow proportionally with the amount of missing data. Given the extreme scenario examined (all permanent stations missing data simultaneously) and its relatively small effect on the AADT estimate, it can be concluded that the current practice for treating missing data does not involve a considerable compromise in the accuracy of AADT estimation.
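The missing-data experiment can be sketched in a few lines. The code below is an illustrative reconstruction, not the study's actual procedure: it fabricates a year of daily counts for a hypothetical station, keeps only one, two, or three random full weeks per month (the availability scenarios considered in the study), and measures the resulting AADT error:

```python
import random
import statistics

def simulate_missing_data_aadt(daily_counts, weeks_per_month, seed=0):
    """Estimate AADT when only a few random 7-day weeks survive per month."""
    rng = random.Random(seed)
    month_lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    kept, start = [], 0
    for length in month_lengths:
        month = daily_counts[start:start + length]
        start += length
        # Split the month into full 7-day weeks; partial weeks are dropped.
        weeks = [month[i:i + 7] for i in range(0, length - 6, 7)]
        for week in rng.sample(weeks, min(weeks_per_month, len(weeks))):
            kept.extend(week)
    return statistics.mean(kept)

# Hypothetical station: 5000 veh/day base, a summer peak, a weekend dip.
daily = [5000 + (1000 if 150 <= d <= 240 else 0) - (800 if d % 7 >= 5 else 0)
         for d in range(365)]
true_aadt = statistics.mean(daily)
for weeks in (1, 2, 3):
    est = simulate_missing_data_aadt(daily, weeks)
    print(f"{weeks} week(s)/month: {abs(est - true_aadt) / true_aadt:.2%} error")
```

Because each kept week spans every day of the week once, the weekend pattern averages out, which is consistent with the study's finding that substantial missing data need not translate into a proportional AADT error.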

Keywords: traffic monitoring program, AADT, missing data, adjustment factors, traffic data collection, permanent stations

Procedia PDF Downloads 0
25683 Data Integration with Geographic Information System Tools for Rural Environmental Monitoring

Authors: Tamas Jancso, Andrea Podor, Eva Nagyne Hajnal, Peter Udvardy, Gabor Nagy, Attila Varga, Meng Qingyan

Abstract:

The paper deals with the conditions and circumstances of integrating remotely sensed data for rural environmental monitoring. The main task is making sound decisions during the integration process when the data sources differ in resolution, location, spectral channels, and dimension. Exact knowledge of the integration and data fusion possibilities requires knowing the properties (metadata) that characterize each dataset. The paper explains the joining of these data sources through their attribute data using a sample project. The resulting product will be used for rural environmental analysis.
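A first integration decision of the kind described, picking a common grid when sources differ in resolution and extent, can be sketched from the layers' metadata alone. The layer names, resolutions, and extents below are invented, and a real workflow would use GIS tooling rather than plain dictionaries:

```python
def fusion_plan(layers):
    """Choose a common grid: resample to the coarsest resolution and clip
    to the intersection of the layers' extents (xmin, ymin, xmax, ymax)."""
    target_res = max(layer["resolution_m"] for layer in layers)
    xmin = max(layer["extent"][0] for layer in layers)
    ymin = max(layer["extent"][1] for layer in layers)
    xmax = min(layer["extent"][2] for layer in layers)
    ymax = min(layer["extent"][3] for layer in layers)
    if xmin >= xmax or ymin >= ymax:
        raise ValueError("layers do not overlap; fusion is impossible")
    return {"resolution_m": target_res, "extent": (xmin, ymin, xmax, ymax)}

# Invented metadata for two rasters assumed to share a coordinate system.
layers = [
    {"name": "optical", "resolution_m": 10, "extent": (0, 0, 100, 100)},
    {"name": "thermal", "resolution_m": 30, "extent": (20, 10, 120, 90)},
]
plan = fusion_plan(layers)
print(plan)
```

Resampling to the coarsest grid avoids inventing detail the coarser sensor never captured; the opposite choice (upsampling) is also used in practice when the fine-grained layer drives the analysis.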

Keywords: remote sensing, GIS, metadata, integration, environmental analysis

Procedia PDF Downloads 113
25682 The Antimicrobial Activity of Marjoram Essential Oil Against Some Antibiotic Resistant Microbes Isolated from Hospitals

Authors: R. A. Abdel Rahman, A. E. Abdel Wahab, E. A. Goghneimy, H. F. Mohamed, E. M. Salama

Abstract:

Infectious diseases are a major cause of death worldwide. The treatment of infections continues to be problematic in modern times because of the severe side effects of some drugs and the growing resistance to antimicrobial agents. Hence, the search for newer, safer, and more potent antimicrobials is a pressing need. Herbal medicines have received much attention as a source of new antibacterial drugs, since they are considered time-tested and comparatively safe for both human use and the environment. In the present study, the antimicrobial activity of marjoram (Origanum majorana L.) essential oil was tested on several Gram-positive and Gram-negative reference bacteria as well as on some antibiotic-resistant microbes isolated from hospitals. Marjoram oil was extracted, and its chemical constituents were identified using GC/MS analysis. Staphylococcus aureus ATCC 6923, Pseudomonas aeruginosa ATCC 9027, Bacillus subtilis ATCC 6633, E. coli ATCC 8736, and two hospital-resistant isolates, 16 and 21, were used. The two isolates were identified by biochemical tests and 16S rRNA sequencing as Proteus spp. and Enterococcus faecalis. The effect of different concentrations of the essential oil on bacterial growth was tested using the agar disk diffusion assay to determine the minimum inhibitory concentrations and the microdilution method to determine the minimum bactericidal concentrations. Marjoram oil was effective against both the reference and the hospital-resistant strains, although the hospital strains were more resistant to it than the reference strains. Growth of P. aeruginosa was completely inhibited at a low oil concentration (4 µl/ml), while the other reference strains were sensitive at concentrations ranging from 5 to 7 µl/ml. The two hospital strains showed sensitivity in media containing 10 and 15 µl/ml of oil. The major components of the oil were cis-beta-terpineol (23.5%), 3,7-dimethyl-1,6-octadien-3-ol 2-aminobenzoate (10.9%), alpha-terpineol (8.6%), and linalool (6.3%). Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were used to compare treated and untreated hospital strains. SEM showed that treated cells were smaller than control cells. TEM showed that the treated cells had lysed: they had ruptured cell walls and appeared empty of cytoplasm, whereas the control cells were intact, with a normal volume of cytoplasm. These results indicate that marjoram oil has a clear antimicrobial effect on hospital-resistant microbes and that natural crude extracts can be excellent resources for new antimicrobial drugs.

Keywords: antimicrobial activity, essential oil, hospital resistance microbes, marjoram

Procedia PDF Downloads 442
25681 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic

Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi

Abstract:

In the genomics field, huge amounts of data are produced by next-generation sequencers (NGS). Data volumes are growing very rapidly; it has been postulated that more than one billion bases will be produced per year by 2020, a growth rate much faster than Moore's law in computer technology. This makes genomics data harder to deal with: storing the data, searching it, and finding the hidden information within it. An analysis platform for genomics big data is therefore required. Recently developed cloud computing enables us to deal with big data more efficiently. Hadoop is one of the distributed computing frameworks and forms the core of a Big Data as a Service (BDaaS) offering. Although many services, e.g., Amazon's, have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g., sequencing data. Our algorithm consists of two parts: first, BDaaS is applied for handling the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in the implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We discuss the algorithm and its feasibility.
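As a rough illustration of the second part, the sketch below mimics the MapReduce pattern in plain Python (a real deployment would run the map and reduce phases distributed on Hadoop) and uses a toy triangular fuzzy membership function to grade k-mer abundance. The reads, k-mer length, and membership breakpoints are all invented and are not from the paper:

```python
from collections import defaultdict

def map_phase(read, k=4):
    """Map step: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(pairs):
    """Reduce step: sum the counts per k-mer (Hadoop would shard this)."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

def fuzzy_abundance(count, low=2, high=5):
    """Toy triangular fuzzy membership grading a k-mer count rare vs. common."""
    if count <= low:
        return {"rare": 1.0, "common": 0.0}
    if count >= high:
        return {"rare": 0.0, "common": 1.0}
    t = (count - low) / (high - low)
    return {"rare": 1.0 - t, "common": t}

reads = ["ACGTACGT", "CGTACGTA", "TTTTACGT"]   # invented toy reads
pairs = [p for read in reads for p in map_phase(read)]
counts = reduce_phase(pairs)
print(counts["ACGT"], fuzzy_abundance(counts["ACGT"]))
```

Because each map call touches only one read, the map phase parallelizes trivially, which is the property the proposed algorithm exploits.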

Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing

Procedia PDF Downloads 296
25680 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data

Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin

Abstract:

Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, which in turn considerably influence the lives of citizens. This research aimed to develop a method for revealing the causes and effects among diverse urban elements, in order to enable a better understanding of urban activities and, from that, better urban planning strategies. Specifically, the study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among randomly chosen urban datasets. Then, based on the results of that analysis, a weighted network relating each urban dataset to the others was provided to users. The weights of the urban data thereby obtained are expected to provide insights into cities and to show how diverse urban activities influence each other and induce feedback.
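The first step, pairwise correlation followed by a weighted network of related datasets, can be sketched as follows. The indicator names and monthly values are invented for illustration and do not come from the Korean public data portal the study used:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def weighted_data_network(datasets, threshold=0.5):
    """Edge list weighted by r for every indicator pair with |r| >= threshold."""
    names = list(datasets)
    edges = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(datasets[a], datasets[b])
            if abs(r) >= threshold:
                edges[(a, b)] = round(r, 3)
    return edges

# Invented monthly indicators for one district, for illustration only.
urban = {
    "bus_ridership": [310, 320, 300, 340, 360, 380],
    "air_pollution": [42, 44, 40, 47, 50, 53],
    "park_visits":   [80, 75, 90, 70, 65, 60],
}
edges = weighted_data_network(urban)
print(edges)
```

Correlation only flags co-movement; establishing the causes and effects the study is after requires additional analysis beyond this network-building step.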

Keywords: big data, machine learning, ontology model, urban data model

Procedia PDF Downloads 412
25679 Application of the Shallow Seismic Refraction Technique to Characterize the Foundation Rocks at the Proposed Tushka New City Site, South Egypt

Authors: Abdelnasser Mohamed, R. Fat-Helbary, H. El Khashab, K. EL Faragawy

Abstract:

Tushka New City is one of the proposed new cities in South Egypt. It is located in the eastern part of the Western Desert of Egypt, between latitudes 22.878º and 22.909º N and longitudes 31.525º and 31.635º E, about 60 kilometers from Abu Simbel City. The main target of the present study is the investigation of the shallow subsurface structural conditions and the dynamic characteristics of the subsurface rocks using the shallow seismic refraction technique. Forty seismic profiles were acquired to calculate the P- and S-wave velocities in the study area. P- and SH-wave velocities can be used to obtain geotechnical parameters, and SH-wave velocities can additionally be used to study the vibration characteristics of the near-surface layers, which are important for earthquake-resistant structural design. The results indicate that the P-wave velocity ranges from 450 to 1800 m/sec in the surface layer and from 1550 to 3000 m/sec in the bedrock, while the SH-wave velocity ranges from 300 to 1100 m/sec and from 1000 to 1800 m/sec for the surface and bedrock layers, respectively. The thickness of the surface layer and the depth to the bedrock were determined along each profile, and the bulk density ρ of the soil layers used in this study was calculated for all layers at each profile. In conclusion, the area is mainly composed of compacted sandstone with high wave velocities, which is considered a good foundation rock. The southwestern part of the study area has the minimum computed P- and SH-wave velocities, the minimum bulk density, and the maximum mean thickness of the surface layer.
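The depth to the bedrock along a profile can be recovered from the refraction travel-time curves. A minimal sketch of the standard two-layer crossover-distance formula, using hypothetical velocities chosen from the ranges reported above (the actual profile data are not given in the abstract):

```python
import math

def layer1_thickness(x_cross, v1, v2):
    """Depth to the refractor in a two-layer model from the crossover
    distance x_cross (m): h = (x_cross / 2) * sqrt((v2 - v1) / (v2 + v1))."""
    if v2 <= v1:
        raise ValueError("refraction requires velocity to increase with depth")
    return (x_cross / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))

# Hypothetical profile: v1 and v2 picked from the velocity ranges reported
# above, crossover distance read off an (invented) travel-time curve.
h = layer1_thickness(x_cross=60.0, v1=900.0, v2=2200.0)
print(f"surface-layer thickness = {h:.1f} m")
```

The crossover distance is where the refracted arrival overtakes the direct wave on the travel-time plot; the intercept-time method gives an equivalent estimate from the same curves.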

Keywords: seismic refraction, Tushka New City, P-waves, SH-waves

Procedia PDF Downloads 379
25678 Data-driven Decision-Making in Digital Entrepreneurship

Authors: Abeba Nigussie Turi, Xiangming Samuel Li

Abstract:

Data-driven business models are more typical of established businesses than of early-stage startups striving to penetrate a market. This paper provides an extensive discussion of the principles of data analytics for early-stage digital entrepreneurial businesses. We develop a data-driven decision-making (DDDM) framework that applies to startups prone to multifaceted barriers such as poor data access and technical and financial constraints, to name a few. The startup DDDM framework proposed in this paper is novel in encompassing startup data analytics enablers and metrics that align with startups' business models, ranging from customer-centric product development to servitization, which is the future of modern digital entrepreneurship.

Keywords: startup data analytics, data-driven decision-making, data acquisition, data generation, digital entrepreneurship

Procedia PDF Downloads 320
25677 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one mechanism for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most dissimilar frames and may not cover the entire content of the video. In this paper, we propose a video summarization method that provides principled objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as the objective, maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate that the proposed approach produces a video summary with better coverage of the entire video content and less redundancy among key frames than state-of-the-art approaches.
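The coverage-versus-redundancy trade-off can be sketched as a greedy optimization. The toy below substitutes a simple L1 feature distance for the quadratic mutual information measure used in the paper, and greedily picks the frame whose minimum distance to the already-chosen frames is largest; the frame "features" are invented:

```python
def frame_distance(a, b):
    """Toy dissimilarity: L1 distance between frame feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def select_key_frames(frames, k):
    """Greedily add the frame farthest (in its minimum distance) from the
    frames already chosen: high coverage, low redundancy."""
    chosen = [0]  # seed with the first frame
    while len(chosen) < k:
        best = max((i for i in range(len(frames)) if i not in chosen),
                   key=lambda i: min(frame_distance(frames[i], frames[j])
                                     for j in chosen))
        chosen.append(best)
    return sorted(chosen)

# Invented 1-D "features" for 6 frames: three visually distinct groups.
frames = [[0.0], [0.1], [5.0], [5.1], [9.0], [9.1]]
summary = select_key_frames(frames, 3)
print(summary)
```

The greedy pass picks one frame from each near-duplicate group, illustrating why an explicit objective avoids the redundancy that purely heuristic selection can produce.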

Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information

Procedia PDF Downloads 262
25676 Prevalence and Factors Associated with Domestic Violence among Pregnant Women in Southwest Ethiopia

Authors: Bediru Abamecha

Abstract:

Background: Domestic violence is a global problem that occurs regardless of culture, ethnicity, or socio-economic class. It is known to be responsible for numerous hospital visits by women. Violence against pregnant women is a health and social problem that poses particular risks to the woman and her unborn child. Objective: The objective of this study will be to assess the prevalence of domestic violence and its correlates among pregnant women in Manna Woreda of Jimma Zone. Methods: A simple random sampling technique will be used to select 12 kebeles (48% of the study area), and systematic sampling will be used to reach the households in the selected kebeles of Manna Woreda, Jimma Zone, southwest Ethiopia, from February 15-25, 2011. In-depth interviews will be conducted with staff of the women's affairs office, the police office, and working nurses, and a minimum of four focus group discussions (FGDs) with 6-8 members each will be held with pregnant women and selected men from the community. SPSS version 16.0 will be used to enter, clean, and analyze the data. Descriptive statistics, such as the mean or median for continuous variables and percentages for categorical variables, will be computed. Bivariate analysis will be used to check the association between the independent variables and domestic violence. Variables found to be associated with domestic violence will be entered into a multiple logistic regression to control for the possible effects of confounders, and the variables with significant associations will finally be identified on the basis of odds ratios with 95% confidence intervals. Statistical significance will be set at p < 0.05. The qualitative data will be summarized manually, thematic analysis will be performed, and finally both types of data will be triangulated.

Keywords: antenatal care, Ethiopian Demographic and Health Survey, domestic violence, Statistical Package for the Social Sciences

Procedia PDF Downloads 510