Search results for: data source.
7832 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective
Authors: Hoa Pham
Abstract:
The literature on language teaching and second language acquisition has been largely driven by monolingual ideology with a common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings of this study contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected from student participants who were working towards a Bachelor degree in English within the English for Business Communication stream through the use of focus group interviews. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others - as they are in their real life. The process of interaction provides the opportunity to obtain the meanings and answers to a problem that are "socially constructed rather than individually created" leading to the capture of real-life data. The distinct feature of group interaction offered by this technique makes it a powerful means of obtaining deeper and richer data than those from individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source supporting language output. This practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge. This practice was also believed to scaffold the students' language production in different contexts. However, the students indicated their preference for teacher code switching to be constrained, as extensive use was believed to negatively impact on their L2 learning and trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively impacted. This study found that teacher code switching was supported in certain contexts by learners, thus suggesting that there is a need for the widespread assumption about the monolingual teaching approach to be re-considered.
Keywords: Code switching, L1 use, L2 teaching, Learners’ perception.
7831 XML Data Management in Compressed Relational Database
Authors: Hongzhi Wang, Jianzhong Li, Hong Gao
Abstract:
XML is an important standard for data exchange and representation. Since relational databases are mature systems, using them to support XML data may bring some advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing technologies for compressed relations and for the special structure of XML are presented, including technologies for XQuery processing in the compressed relational database.
Keywords: XML, compression, query processing
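As a rough illustration of the general idea of shredding XML into relations so that path queries run as joins (not this paper's specific storage or compression scheme), the following Python/sqlite3 sketch uses a hypothetical edge table:

    import sqlite3
    import xml.etree.ElementTree as ET

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE edge (id INTEGER, parent INTEGER, tag TEXT, value TEXT)")

    def shred(elem, parent_id, counter=[0]):
        # assign a node id and record (id, parent, tag, text) in the edge table
        counter[0] += 1
        node_id = counter[0]
        conn.execute("INSERT INTO edge VALUES (?, ?, ?, ?)",
                     (node_id, parent_id, elem.tag, (elem.text or "").strip()))
        for child in elem:
            shred(child, node_id, counter)

    shred(ET.fromstring("<lib><book><title>DB</title></book><book><title>XML</title></book></lib>"), None)

    # relational evaluation of the XPath-like query /lib/book/title as a self-join
    rows = conn.execute("""
        SELECT t.value FROM edge l
        JOIN edge b ON b.parent = l.id AND b.tag = 'book'
        JOIN edge t ON t.parent = b.id AND t.tag = 'title'
        WHERE l.tag = 'lib'""").fetchall()
    print(rows)   # titles of all books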
7830 Deposition Rate and Energy Enhancements of TiN Thin-Film in a Magnetized Sheet Plasma Source
Authors: Hamdi Muhyuddin D. Barra, Henry J. Ramos
Abstract:
Titanium nitride (TiN) has been synthesized using the sheet plasma negative ion source (SPNIS). The parameters for its effective synthesis have been determined from previous experiments and studies. In this study, further enhancement of the deposition rate of TiN synthesis and advancement of the SPNIS operation are presented. This is primarily achieved by the addition of Sm-Co permanent magnets and a modification of the configuration of the TiN deposition process. The magnetic enhancement is aimed at optimizing the sputtering rate and the sputtering yield of the process. The Sm-Co permanent magnets are placed below the Ti target for better sputtering by argon. The Ti target is biased from −250 V to −350 V and is sputtered by Ar plasma produced at a discharge current of 2.5–4 A and a discharge potential of 60–90 V. Steel substrates of dimensions 20 × 20 × 0.5 mm were prepared, with N2:Ar volumetric ratios of 1:3, 1:5 and 1:10. Ocular inspection of the samples shows the bright gold color associated with TiN. XRD characterization confirmed the effective TiN synthesis, as all samples exhibit the (200) and (311) peaks of TiN and the non-stoichiometric Ti2N (220) facet. Cross-sectional SEM results showed an increase in the TiN deposition rate of up to 0.35 μm/min, double what was previously obtained [1]. Scanning electron micrographs give a comparative morphological picture of the samples. Vickers hardness tests gave a maximum hardness value of 21.094 GPa.
Keywords: Chemical vapor deposition, Magnetized sheet plasma, Thin-film synthesis, Titanium nitride.
7829 A System for Analyzing and Eliciting Public Grievances Using Cache Enabled Big Data
Authors: P. Kaladevi, N. Giridharan
Abstract:
The main purpose of the system for analyzing and eliciting public grievances is to receive and process all sorts of complaints from the public and respond to users. Owing to the large number of complaints, the data become big data, which are difficult to store and process. The proposed system uses HDFS to store the big data and MapReduce to process it. The concept of a cache is applied in the system to provide immediate response and timely action using big data analytics; cache-enabled big data improves the response time of the system. The unstructured data provided by users are efficiently handled through the MapReduce algorithm. Complaints are processed in the order of the hierarchy of authority. The drawbacks of the traditional database system used in the existing system are addressed by our system through a cache-enabled Hadoop Distributed File System. MapReduce framework code has the potential to leak sensitive data through the computation process; we propose a system that adds noise to the output of the reduce phase to avoid signaling the presence of sensitive data. If a complaint is not processed in ample time, it is automatically forwarded to a higher authority, thereby ensuring that every complaint is eventually processed. A copy of the filed complaint is sent as a digitally signed PDF document to the user's e-mail address, which serves as proof. The system report serves as essential data when making important decisions based on legislation.
Keywords: Big Data, Hadoop, HDFS, Caching, MapReduce, web personalization, e-governance.
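To illustrate the reduce-phase noise addition described above, here is a toy map/reduce pipeline in plain Python (Hadoop itself is not shown); the Laplace mechanism and the complaint fields are assumptions, not necessarily the authors' exact design:

    import random
    from collections import defaultdict

    complaints = [
        {"ward": "North", "category": "water"},
        {"ward": "North", "category": "roads"},
        {"ward": "South", "category": "water"},
    ]

    def map_phase(record):
        # emit ((ward, category), 1) pairs, as a Hadoop mapper would
        yield (record["ward"], record["category"]), 1

    def reduce_phase(pairs, scale=1.0):
        counts = defaultdict(int)
        for key, value in pairs:
            counts[key] += value
        # add Laplace(0, scale) noise to each count before it leaves the reducer
        laplace = lambda: random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return {key: count + laplace() for key, count in counts.items()}

    pairs = [kv for record in complaints for kv in map_phase(record)]
    print(reduce_phase(pairs))   # noisy counts per (ward, category)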
7828 Description of Reported Foodborne Diseases in Selected Communities within the Greater Accra Region-Ghana: Epidemiological Review of Surveillance Data
Authors: Benjamin Osei-Tutu, Henrietta Awewole Kolson
Abstract:
Background: Acute gastroenteritis is one of the most frequently reported Out-Patient Department (OPD) cases. However, the causative pathogens of these cases are rarely identified at the OPD due to delays in laboratory results or failure to obtain specimens before antibiotics are administered. Method: A retrospective review of surveillance data from the Adentan Municipality, Accra, Ghana that were recorded in the national foodborne disease surveillance system of Ghana was conducted, with the main aim of describing the epidemiology and food practices of cases reported from the Adentan Municipality. The study involved a retrospective review of surveillance data kept on patients who visited health facilities involved in foodborne disease surveillance in Ghana from January 2015 to December 2016. Results: A total of 375 cases were reviewed, and these were classified as viral hepatitis (hepatitis A and E), cholera (Vibrio cholerae), dysentery (Shigella sp.), typhoid fever (Salmonella sp.) or gastroenteritis. All recorded cases were suspected cases, and the average number of cases recorded per week was three. Typhoid fever and dysentery were the two main clinically diagnosed foodborne illnesses. The highest number of cases was observed during the late dry season (Feb to April), which marks the end of the dry season and the beginning of the rainy season. A relatively high number of cases was also observed during the late wet season (Jul to Oct), when rainfall is heaviest. Home-made food and street-vended food were the major sources of suspected etiological food, accounting for 49.01% and 34.87% of the cases respectively. Conclusion: The majority of cases were classified as gastroenteritis due to the absence of laboratory confirmation. A few cases were classified as typhoid fever or dysentery based on the clinical symptoms presented. Patients reporting with foodborne diseases were found to consume home-made meals and street-vended foods as their predominant sources of food.
Keywords: Accra, etiologic food, food poisoning, gastroenteritis, illness, surveillance.
7827 Improved K-Modes for Categorical Clustering Using Weighted Dissimilarity Measure
Authors: S.Aranganayagi, K.Thangavel
Abstract:
K-Modes is an extension of the K-Means clustering algorithm, developed to cluster categorical data, where the mean is replaced by the mode. The similarity measure proposed by Huang is the simple matching or mismatching measure. The weights of attribute values contribute much to clustering; thus, in this paper we propose a new weighted dissimilarity measure for K-Modes, based on the ratio of the frequency of attribute values in the cluster to that in the data set. The new weighted measure is evaluated on data sets obtained from the UCI data repository. The results are compared with K-Modes and K-representative, and show that the new measure generates clusters with high purity.
Keywords: Clustering, categorical data, K-Modes, weighted dissimilarity measure
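A minimal sketch of a frequency-ratio-weighted mismatch measure of the kind described above; the exact weighting formula is not given in the abstract, so the ratio used here is illustrative:

    def weighted_dissimilarity(record, mode, cluster, dataset):
        # mismatch measure where each attribute's contribution is weighted by how
        # frequent the mode's value is in the cluster relative to the whole data set
        # (illustrative weighting; the paper defines its own ratio)
        d = 0.0
        for j, (x, m) in enumerate(zip(record, mode)):
            if x == m:
                continue  # simple matching: equal values add nothing
            f_cluster = sum(1 for r in cluster if r[j] == m) / max(len(cluster), 1)
            f_dataset = sum(1 for r in dataset if r[j] == m) / len(dataset)
            d += f_cluster / f_dataset if f_dataset else 1.0
        return d

    data = [("red", "s"), ("red", "m"), ("blue", "m"), ("blue", "l")]
    cluster = [("red", "s"), ("red", "m")]
    print(weighted_dissimilarity(("blue", "l"), ("red", "m"), cluster, data))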
7826 Mobile Phone as a Tool for Data Collection in Field Research
Authors: Sandro Mourão, Karla Okada
Abstract:
The necessity of accurate and timely field data is shared among organizations engaged in fundamentally different activities, whether public services or commercial operations. Basically, there are three major components in the process of qualitative research: data collection, interpretation and organization of data, and the analytic process. Representative technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to the data collection activity of field research in order to improve this process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.
Keywords: Data Gathering, Field Research, Mobile Phone, Survey.
7825 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD only, AD only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the estimated treatment effects, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: Aggregate data, combined-level data, individual patient data, meta-analysis.
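The two evaluation criteria named above can be sketched as follows for a set of simulated effect estimates (the numbers are illustrative, not from the study):

    import math

    def percentage_relative_bias(estimates, true_effect):
        # PRB: relative deviation of the mean estimate from the true effect, in percent
        mean_est = sum(estimates) / len(estimates)
        return 100.0 * (mean_est - true_effect) / true_effect

    def rmse(estimates, true_effect):
        # root mean-square error of the estimates around the true effect
        return math.sqrt(sum((e - true_effect) ** 2 for e in estimates) / len(estimates))

    simulated = [0.48, 0.52, 0.55, 0.47, 0.51]   # estimates from simulated meta-analyses
    print(percentage_relative_bias(simulated, 0.50), rmse(simulated, 0.50))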
7824 Performance Comparison of AODV and Soft AODV Routing Protocol
Authors: Abhishek, Seema Devi, Jyoti Ohri
Abstract:
A mobile ad hoc network (MANET) represents a system of wireless mobile nodes that can self-organize freely and dynamically into an arbitrary and temporary network topology. Unlike in a wired network, a wireless network interface has a limited transmission range. Routing is the task of forwarding data packets from a source to a given destination. The Ad-hoc On Demand Distance Vector (AODV) routing protocol creates a path to a destination only when it is required. This paper describes the implementation of the AODV routing protocol using the MATLAB-based Truetime simulator. In MANETs, node movements are not fixed but random in nature; hence, intelligent techniques, i.e. fuzzy logic and ANFIS, are used to optimize the transmission range. In this paper, we compare the transmission range of AODV, fuzzy AODV and ANFIS AODV. For the soft computing AODV, we take transmitted power and received threshold as inputs and transmission range as output. ANFIS gives better results than fuzzy AODV.
Keywords: ANFIS, AODV, fuzzy, MANET, reactive routing protocol, routing protocol, Truetime.
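A minimal sketch of how a fuzzy rule base could map transmitted power and received threshold to a transmission range, using a Sugeno-style weighted average; the membership functions, rule consequents and units are assumptions, not the paper's tuned system:

    def tri(x, a, b, c):
        # triangular membership function peaking at b over the interval [a, c]
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_range(tx_power_dbm, rx_threshold_dbm):
        # rule firing strengths (min of the antecedent memberships) and crisp consequents
        rules = [
            (min(tri(tx_power_dbm, 0, 5, 15),   tri(rx_threshold_dbm, -100, -90, -80)),  90.0),
            (min(tri(tx_power_dbm, 10, 20, 30), tri(rx_threshold_dbm, -90, -80, -70)),  160.0),
            (min(tri(tx_power_dbm, 25, 35, 45), tri(rx_threshold_dbm, -80, -70, -60)),  250.0),
        ]
        num = sum(weight * out for weight, out in rules)
        den = sum(weight for weight, _ in rules)
        return num / den if den else 0.0   # weighted average of the fired rules

    print(fuzzy_range(18.0, -78.0))   # estimated transmission range in metres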
7823 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels
Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva
Abstract:
In this paper, a hybrid FDMA-TDMA access technique in a cooperative distributed fashion is analyzed by introducing and implementing a modified version of the protocol proposed in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, as the system works with a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
Keywords: Cooperation Diversity, Best Channel Selection scheme, MIMO relay networks, V-BLAST, QR decomposition, and MMSE.
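The MMSE nulling step mentioned above can be sketched with NumPy as follows; this shows only the linear nulling W = (H^H H + sigma^2 I)^-1 H^H for a 2x2 channel, without the ordered successive cancellation of full V-BLAST, and the channel and noise values are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)  # 2x2 Rayleigh channel
    symbols = np.array([1, -1])            # BPSK symbols from the two source terminals
    noise_var = 0.1
    y = H @ symbols + np.sqrt(noise_var / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

    # MMSE nulling matrix applied to the received vector
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(2)) @ H.conj().T
    estimates = W @ y
    detected = np.sign(estimates.real)     # slice back to BPSK decisions
    print(detected)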
7822 Dental Ethics versus Malpractice, as Phenomenon with a Growing Trend
Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo
Abstract:
Dealing with emerging cases of dental malpractice that are justified by appeal to the clear rules of dental ethics is a phenomenon with an increasing trend in today's dental practice. Dentists should clearly understand where the limit of malpractice lies, with or without minimal or major consequences for the affected patient, and when harm can be justified as a complication of dental treatment, in keeping with the rules of dental ethics in the dental office. Indeed, malpractice can occur in cases of lack of professionalism, but it can also arise as a consequence of anatomical and physiological limitations in the implementation of the dental protocols predetermined and indicated for the patient in the treatment plan of his or her personal record. Let this article serve as a short communication between readers and interested parties about the problems that dental malpractice can bring to the community. Malpractice should not be seen only as a wrong professional approach, but also as a phenomenon that can occur during dental practice. The aim of this article is to present the latest data published in the literature about malpractice. The keywords were combined in such a way as to give the necessary scope for collecting the right information from publication networks in this field, always from the point of view of the dentist rather than that of the lawyer or jurist. From the findings included in this article, it was noticed that the diversity of approaches to the phenomenon across countries depends on the legal basis that each country has. There is a lack of, or only a small number of, articles that touch on this topic, and those articles present a limited amount of data on the same topic. Dental malpractice should not be hidden under the guise of various dental complications justified by the strict rules of ethics for patients treated in the dental chair. Individual experiences of dental malpractice must be published with the aim of serving as a source of experience for future generations of dentists.
Keywords: Dental ethics, malpractice, professional protocol, random deviation, dental tourism.
7821 DIVAD: A Dynamic and Interactive Visual Analytical Dashboard for Exploring and Analyzing Transport Data
Authors: Tin Seong Kam, Ketan Barshikar, Shaun Tan
Abstract:
The advances in location-based data collection technologies such as GPS, RFID, etc. and the rapid reduction of their costs provide us with a huge and continuously increasing amount of data about the movement of vehicles, people and goods in an urban area. This explosive growth of geospatially-referenced data has far outpaced the planner's ability to utilize and transform the data into insightful information, thus creating an adverse impact on the return on the investment made to collect and manage the data. Addressing this pressing need, we designed and developed DIVAD, a dynamic and interactive visual analytics dashboard that allows city planners to explore and analyze the city's transportation data to gain valuable insights about the city's traffic flow and transportation requirements. We demonstrate the potential of DIVAD through the use of interactive choropleth and hexagon binning maps to explore and analyze large taxi-transportation data of Singapore for different geographic and time zones.
Keywords: Geographic Information System (GIS), Movement Data, GeoVisual Analytics, Urban Planning.
7820 Performance and Emission Characteristics of a DI Diesel Engine Fuelled with Cashew Nut Shell Liquid (CNSL)-Diesel Blends
Authors: Velmurugan. A, Loganathan. M
Abstract:
The increased number of automobiles in recent years has resulted in great demand for fossil fuel. This has led to the development of automobiles running on alternative fuels, including gaseous fuels, biofuels and vegetable oils. Energy from biomass, and more specifically bio-diesel, is one of the opportunities that could cover the future shortage of fossil fuel. Biomass in the form of cashew nut shells represents a new and abundant source of energy in India. The bio-fuel derived from cashew nut shell oil and its blends with diesel are promising alternative fuels for diesel engines. In this work, pyrolysis-derived Cashew Nut Shell Liquid (CNSL)-Diesel Blends (CDB) were used to run a Direct Injection (DI) diesel engine. The experiments were conducted with various blends of CNSL and diesel, namely B20, B40, B60, B80 and B100, and the results are compared with neat diesel operation. The brake thermal efficiency decreased for all blends of CNSL and diesel except the lower blend B20, whose brake thermal efficiency is close to that of diesel fuel. The emission levels of all CNSL and diesel blends also increased compared to neat diesel. The higher viscosity and lower volatility of CNSL lead to poor mixture formation and hence lower brake thermal efficiency and higher emission levels. The higher emission levels can be reduced by adding suitable additives and oxygenates to the CNSL and diesel blends.
Keywords: Bio-oil, Biodiesel, Cardanol, Cashew nut shell liquid (CNSL)
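The brake thermal efficiency compared above is conventionally computed as brake power divided by the heat supplied by the fuel; a small sketch with assumed engine figures, not values from the study:

    def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, calorific_value_mj_per_kg):
        # BTE = brake power / heat input from the fuel, expressed as a percentage
        heat_input_kw = fuel_flow_kg_per_h / 3600.0 * calorific_value_mj_per_kg * 1000.0
        return 100.0 * brake_power_kw / heat_input_kw

    # assumed figures for a small DI engine running on a B20 blend
    print(round(brake_thermal_efficiency(3.7, 1.1, 40.0), 2))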
7819 Gene Expression Data Classification Using Discriminatively Regularized Sparse Subspace Learning
Authors: Chunming Xu
Abstract:
Sparse representation, which can represent high dimensional data effectively, has been successfully used in computer vision and pattern recognition problems. However, it does not consider the label information of data samples. To overcome this limitation, we develop a novel dimensionality reduction algorithm, namely discriminatively regularized sparse subspace learning (DR-SSL), in this paper. The proposed DR-SSL algorithm can not only make use of the sparse representation to model the data, but can also effectively employ the label information to guide the dimensionality reduction procedure. In addition, the presented algorithm can effectively deal with the out-of-sample problem. The experiments on gene-expression data sets show that the proposed algorithm is an effective tool for dimensionality reduction and gene-expression data classification.
Keywords: sparse representation, dimensionality reduction, label information, sparse subspace learning, gene-expression data classification.
7818 Application Methodology for the Generation of 3D Thermal Models Using UAV Photogrammetry and Dual Sensors for Mining/Industrial Facilities Inspection
Authors: Javier Sedano-Cibrián, Julio Manuel de Luis-Ruiz, Rubén Pérez-Álvarez, Raúl Pereda-García, Beatriz Malagón-Picón
Abstract:
Structural inspection activities are necessary to ensure the correct functioning of infrastructures. UAV techniques have become more popular than traditional techniques. Specifically, UAV photogrammetry allows time and cost savings. The development of this technology has permitted the use of low-cost thermal sensors in UAVs. The representation of 3D thermal models with this type of equipment is in continuous evolution. The direct processing of thermal images usually leads to errors and inaccurate results. In this paper, a methodology is proposed for the generation of 3D thermal models using dual sensors, which involves the application of RGB and thermal images in parallel. Hence, the RGB images are used as the basis for the generation of the model geometry, and the thermal images are the source of the surface temperature information that is projected onto the model. The representations of mining/industrial facilities obtained in this way can be used for inspection activities.
Keywords: Aerial thermography, data processing, drone, low-cost, point cloud.
7817 Determining Cluster Boundaries Using Particle Swarm Optimization
Authors: Anurag Sharma, Christian W. Omlin
Abstract:
The self-organizing map (SOM) is a well-known data reduction technique used in data mining. Data visualization can reveal structure in data sets that is otherwise hard to detect from raw data alone; however, interpretation through visual inspection is prone to errors and can be very tedious. There are several techniques for the automatic detection of clusters of code vectors found by SOMs, but they generally do not take into account the distribution of the code vectors; this may lead to unsatisfactory clustering and poor definition of cluster boundaries, particularly where the density of data points is low. In this paper, we propose the use of a generic particle swarm optimization (PSO) algorithm for finding cluster boundaries directly from the code vectors obtained from SOMs. The application of our method to unlabeled call data for a mobile phone operator demonstrates its feasibility. The PSO algorithm utilizes the U-matrix of the SOM to determine cluster boundaries; the results of this novel automatic method correspond well to boundary detection through visual inspection of the code vectors and to the k-means algorithm.
Keywords: Particle swarm optimization, self-organizing maps, clustering, data mining.
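A generic global-best PSO of the kind referred to above can be sketched as follows; the fitness function here is a placeholder, whereas the paper scores candidate boundaries on the SOM U-matrix:

    import random

    def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        # standard global-best PSO minimising `fitness` over [-1, 1]^dim
        pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [fitness(p) for p in pos]
        gbest = min(zip(pbest_val, pbest))[1][:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                val = fitness(pos[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val < fitness(gbest):
                        gbest = pos[i][:]
        return gbest

    # placeholder fitness; the paper instead scores candidate boundaries on the U-matrix
    print(pso(lambda p: sum(x * x for x in p), dim=2))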
7816 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables; past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured) and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to a huge quantity of data.
Keywords: Predictive analysis, big data, predictive analysis algorithms, CART algorithm.
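One natural way to parallelize the CART split search, distributing the per-feature threshold scan across processes, is sketched below; this illustrates the general idea of parallelizing the calculation, not the authors' exact extension:

    from concurrent.futures import ProcessPoolExecutor

    def gini(labels):
        # Gini impurity of a list of class labels
        n = len(labels)
        if n == 0:
            return 0.0
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    def best_split_for_feature(args):
        # scan one feature's thresholds; this is the unit of work we distribute
        feature_idx, X, y = args
        best = (float("inf"), None)
        for threshold in sorted({row[feature_idx] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[feature_idx] <= threshold]
            right = [y[i] for i, row in enumerate(X) if row[feature_idx] > threshold]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            best = min(best, (score, threshold))
        return best[0], feature_idx, best[1]

    if __name__ == "__main__":
        X = [[2.0, 7.1], [1.5, 6.3], [3.2, 5.0], [3.8, 4.2]]
        y = ["a", "a", "b", "b"]
        with ProcessPoolExecutor() as pool:
            results = pool.map(best_split_for_feature, [(j, X, y) for j in range(2)])
        print(min(results))   # (weighted Gini, feature index, threshold) of the best split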
7815 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach
Authors: Sarisa Pinkham, Kanyarat Bussaban
Abstract:
The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
Keywords: Daily rainfall, Image processing, Approximation, Pixel value data.
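Since the abstract does not give the pixel-to-rainfall mapping, the sketch below assumes a hypothetical linear calibration and shows how an RMSE figure of the kind reported would be computed against gauge observations:

    import math

    # hypothetical calibration: rainfall (mm) as a linear function of a map pixel's value
    def rainfall_from_pixel(pixel_value, slope=0.5, intercept=0.0):
        return slope * pixel_value + intercept

    pixels   = [12, 40, 88, 130]          # pixel values sampled from daily rainfall maps
    observed = [5.8, 21.0, 43.5, 66.0]    # gauge rainfall for the same days (mm)

    estimates = [rainfall_from_pixel(p) for p in pixels]
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimates, observed)) / len(observed))
    print(round(rmse, 3))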
7814 Steps towards the Development of National Health Data Standards in Developing Countries: An Exploratory Qualitative Study in Saudi Arabia
Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian R. Murray
Abstract:
The proliferation of health data standards today is somewhat overlapping and conflicting, resulting in market confusion and leading to increasing proprietary interests. The government's role and support in the standardization of health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has not explored the different steps required to be undertaken by the government towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the issues affecting the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of different experts in the areas of data exchange, standards and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan and a national accreditation body; more important still is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners in planning health data standards, in particular in developing countries.
Keywords: Interoperability, Case Study, Health Data Standards, Medical Data Exchange, Saudi Arabia.
7813 Test Data Compression Using a Hybrid of Bitmask Dictionary and 2n Pattern Runlength Coding Methods
Authors: C. Kalamani, K. Paramasivam
Abstract:
In VLSI, testing plays an important role. The major problems in testing are test data volume and test power. An important solution to reduce test data volume and test time is test data compression. The proposed technique combines the bitmask dictionary and 2n pattern run-length coding methods and provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. This method has been implemented using MATLAB and an HDL language to reduce test data volume and memory requirements. The method is applied to various benchmark test sets, and the results are compared with other existing methods. The proposed technique can achieve a compression ratio of up to 86%.
Keywords: Bitmask dictionary, 2n pattern run length code, system-on-chip, SOC, test data compression.
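A compression ratio of the kind reported above is conventionally the percentage reduction in test data volume; the sketch below pairs it with a plain run-length encoder (the 5-bit run-length field is an assumed encoding, and the actual hybrid bitmask-dictionary scheme is not reproduced):

    def run_length_encode(bits):
        # encode a bit string as (bit, run length) pairs
        runs, prev, count = [], bits[0], 0
        for b in bits:
            if b == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = b, 1
        runs.append((prev, count))
        return runs

    def compression_ratio(original_bits, compressed_bits):
        # percentage reduction relative to the original test set size
        return 100.0 * (original_bits - compressed_bits) / original_bits

    test_cube = "000000001111111100000000"
    runs = run_length_encode(test_cube)
    compressed_size = len(runs) * (1 + 5)     # 1 bit value + 5-bit run length per run (assumed)
    print(runs, compression_ratio(len(test_cube), compressed_size))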
7812 A Hybrid Data Mining Method for the Medical Classification of Chest Pain
Authors: Sung Ho Ha, Seong Hyeon Joo
Abstract:
Data mining techniques have been used in medical research for many years and are known to be effective. In order to solve such problems as long waiting times, congestion, and delayed patient care faced by emergency departments, this study concentrates on building a hybrid methodology combining data mining techniques such as association rules and classification trees. The methodology is applied to real-world emergency data collected from a hospital and is evaluated by comparison with other techniques. The methodology is expected to help physicians make a faster and more accurate classification of chest pain diseases.
Keywords: Data mining, medical decisions, medical domain knowledge, chest pain.
7811 Barriers to the Use of Factoring Accounts Receivables: The Ghanaian Contractor’s Perception
Authors: E. Kissi, V. K. Acheamfour, J. J. Gyimah, T. Adjei-Kumi
Abstract:
Factoring of accounts receivable is widely accepted as an alternative financing source and is utilized in almost every industry that sells business-to-business or business-to-government. However, its patronage in the construction industry is very limited, as several barriers hinder its application there. This study aims at assessing the barriers to the use of factoring of accounts receivables in the Ghanaian construction industry. The study adopted the sequential exploratory research method, in which structured and unstructured questionnaires were conveniently distributed to D1K1 and D2K2 construction firms in Ghana. Data were analyzed using the one-sample t-test and Kendall’s coefficient of concordance. The most severe challenge identified was the high cost of factoring. Other critical challenges were low knowledge of factoring processes, inadequate access to information on factoring, and the high risks involved in factoring. Hence, it is recommended that contractors be made aware of the prospects of factoring of accounts receivables in the construction industry. This study serves as a basis for further rigorous research into factoring of accounts receivables in the industry.
Keywords: Barriers, contractors, factoring accounts receivables, Ghanaian, perception.
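Kendall's coefficient of concordance, used above to check agreement among respondents on the ranking of barriers, can be computed as W = 12S / (m^2 (n^3 - n)); a small sketch with hypothetical rankings (no tie correction):

    def kendalls_w(rankings):
        # rankings: one list of ranks (1..n over the n barriers) per respondent
        m, n = len(rankings), len(rankings[0])
        rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
        mean_sum = sum(rank_sums) / n
        s = sum((ri - mean_sum) ** 2 for ri in rank_sums)
        return 12.0 * s / (m ** 2 * (n ** 3 - n))

    # three hypothetical respondents ranking four barriers (1 = most severe)
    rankings = [[1, 2, 3, 4],
                [1, 3, 2, 4],
                [2, 1, 3, 4]]
    print(round(kendalls_w(rankings), 3))   # 1 means perfect agreement, 0 none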
7810 Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses issues in the textile industry using data mining techniques. Data mining has been applied to data on the stitching of garment products obtained from a textile company. The data mining techniques applied to these data were the CHAID algorithm, the CART algorithm, regression analysis and artificial neural networks. Classification-based analyses were used in the data mining, and a decision model for production per person and the variables affecting production was obtained by this method. In the study, the results show that as the daily working time increases, the production per person decreases. In addition, the relationship between total daily working time and production per person is negative, and it is the strongest relationship observed.
Keywords: Data mining, textile production, decision trees, classification.
7809 Application and Limitation of Parallel Modeling in Multidimensional Sequential Pattern
Authors: Mahdi Esmaeili, Mansour Tarafdar
Abstract:
The goal of data mining algorithms is to discover useful information embedded in large databases. One of the most important data mining problems is the discovery of frequently occurring patterns in sequential data. In a multidimensional sequence, each event depends on more than one dimension. The search space is quite large, and serial algorithms are not scalable for very large datasets. To address this, it is necessary to study scalable parallel implementations of sequence mining algorithms. In this paper, we present a model for multidimensional sequences and describe a parallel algorithm based on data parallelism. Simulation experiments show good load balancing and scalable, acceptable speedup over different numbers of processors and problem sizes, and demonstrate that our approach can work efficiently in a real parallel computing environment.
Keywords: Sequential patterns, data mining, parallel algorithm, multidimensional sequence data
7808 Multi-Modal Film Boiling Simulations on Adaptive Octree Grids
Authors: M. Wasy Akhtar
Abstract:
Multi-modal film boiling simulations are carried out on adaptive octree grids. The liquid-vapor interface is captured using the volume-of-fluid framework adjusted to account for exchanges of mass, momentum, and energy across the interface. Surface tension effects are included using a volumetric source term in the momentum equations. The phase change calculations are conducted based on the exact location and orientation of the interface; however, the source terms are calculated using the mixture variables to be consistent with the one field formulation used to represent the entire fluid domain. The numerical model on octree representation of the computational grid is first verified using test cases including advection tests in severely deforming velocity fields, gravity-based instabilities and bubble growth in uniformly superheated liquid under zero gravity. The model is then used to simulate both single and multi-modal film boiling simulations. The octree grid is dynamically adapted in order to maintain the highest grid resolution on the instability fronts using markers of interface location, volume fraction, and thermal gradients. The method thus provides an efficient platform to simulate fluid instabilities with or without phase change in the presence of body forces like gravity or shear layer instabilities.
Keywords: Boiling flows, dynamic octree grids, heat transfer, interface capturing, phase change.
7807 Isolation of a Bacterial Community with High Removal Efficiencies of the Insecticide Bendiocarb
Authors: Eusebio A. Jiménez-Arévalo, Deifilia Ahuatzi-Chacón, Juvencio Galíndez-Mayer, Cleotilde Juárez-Ramírez, Nora Ruiz-Ordaz
Abstract:
Bendiocarb is a known toxic xenobiotic that presents acute and chronic risks for freshwater invertebrates and estuarine and marine biota; thus, the treatment of water contaminated with this insecticide is of concern. In this paper, a bacterial community with the capacity to grow on bendiocarb as its sole carbon and nitrogen source was isolated by enrichment techniques in batch culture from samples of a composting plant located in the northeast of Mexico City. Eight cultivable bacteria were isolated from the microbial community and identified by PCR amplification of 16S rDNA: Pseudoxanthomonas spadix (NC_016147.2, 98%), Ochrobacterium anthropi (NC_009668.1, 97%), Staphylococcus capitis (NZ_CP007601.1, 99%), Bosea thiooxidans (NZ_LMAR01000067.1, 99%), Pseudomonas denitrificans (NC_020829.1, 99%), Agromyces sp. (NZ_LMKQ01000001.1, 98%), Bacillus thuringiensis (NC_022873.1, 97%) and Pseudomonas alkylphenolia (NZ_CP009048.1, 98%); NCBI accession numbers and percentages of similarity are indicated in parentheses. These bacteria were regarded as the isolated species for having the best similarity matches. The ability of the bacterial community, immobilized in a packed bed biofilm reactor using volcanic stone fragments (tezontle) as support, to degrade bendiocarb was evaluated. The reactor system was operated in batch mode using mineral salts medium and 30 mg/L of bendiocarb as the carbon and nitrogen source. With this system, an overall removal efficiency (ηbend) of around 90% was reached.
Keywords: Bendiocarb, biodegradation, biofilm reactor, carbamate insecticide.
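The overall removal efficiency reported above is conventionally the fractional drop in concentration across the reactor; a tiny sketch, where the 3 mg/L residual is an assumed value consistent with the 30 mg/L feed and roughly 90% removal:

    def removal_efficiency(influent_mg_l, effluent_mg_l):
        # fraction of bendiocarb removed across the packed bed reactor, in percent
        return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

    print(removal_efficiency(30.0, 3.0))   # 90.0 for an assumed residual of 3 mg/L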
7806 Categorical Data Modeling: Logistic Regression Software
Authors: Abdellatif Tchantchane
Abstract:
A MATLAB-based software package for logistic regression is developed to enhance the process of teaching quantitative topics and to assist researchers in analyzing a wide range of applications where categorical data are involved. The software offers the option of performing stepwise logistic regression to select the most significant predictors. It includes a feature to detect influential observations in the data and investigates the effect of dropping or misclassifying an observation on a predictor variable. The input data may consist either of a set of individual responses (yes/no) with the predictor variables or of grouped records summarizing the various categories for each unique set of predictor variable values. Graphical displays are used to output various statistical results and to assess the goodness of fit of the logistic regression model. The software recognizes possible convergence constraints when they are present in the data, and the user is notified accordingly.
Keywords: Logistic regression, Matlab, Categorical data, Influential observation.
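As a language-neutral sketch of the core fitting step behind such software (the package itself is MATLAB-based and also handles grouped data, stepwise selection and diagnostics), here is a plain maximum-likelihood logistic fit by gradient ascent in NumPy:

    import numpy as np

    def fit_logistic(X, y, lr=0.1, iters=5000):
        # maximum-likelihood fit by plain gradient ascent; an intercept column is added to X
        X = np.column_stack([np.ones(len(X)), X])
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))      # predicted probabilities
            beta += lr * X.T @ (y - p) / len(y)      # gradient of the log-likelihood
        return beta

    X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
    y = np.array([0, 0, 1, 0, 1, 1])                 # individual yes/no responses
    print(fit_logistic(X, y))                        # [intercept, slope]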
7805 Role of Association Rule Mining in Numerical Data Analysis
Authors: Sudhir Jagtap, Kodge B. G., Shinde G. N., Devshette P. M
Abstract:
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Numerical data analysis has become a key process in research and development in all these fields [6]. In this paper, we have made an attempt to analyze specified numerical patterns using association rule mining techniques with minimum confidence and minimum support mining criteria. The extracted rules and the analyzed results are demonstrated graphically. Association rules are a simple but very useful form of data mining that describe the probabilistic co-occurrence of certain events within a database [7]. They were originally designed to analyze market-basket data, in which the likelihood of items being purchased together within the same transaction is analyzed.
Keywords: Numerical data analysis, data mining, association rule mining
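A minimal sketch of association rule mining over numerical readings that have been discretised into categorical bins, using minimum support and minimum confidence thresholds; the transactions and thresholds are illustrative:

    from itertools import combinations

    # numerical readings discretised into categorical bins, one transaction per row
    transactions = [
        {"temp=high", "pressure=low", "fail=yes"},
        {"temp=high", "pressure=low", "fail=yes"},
        {"temp=low", "pressure=high", "fail=no"},
        {"temp=high", "pressure=high", "fail=no"},
    ]

    def support(itemset):
        # fraction of transactions containing every item in the itemset
        return sum(itemset <= t for t in transactions) / len(transactions)

    min_support, min_confidence = 0.5, 0.8
    items = sorted({i for t in transactions for i in t})
    for a, b in combinations(items, 2):           # one direction shown per pair, for brevity
        sup = support({a, b})
        if sup >= min_support and support({a}) > 0:
            conf = sup / support({a})
            if conf >= min_confidence:
                print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")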
7804 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey, we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion of future research lines and synergies among the former techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
7803 A Generalised Relational Data Model
Authors: Georgia Garani
Abstract:
A generalised relational data model is formalised for the representation of data with a nested structure of arbitrary depth. A recursive algebra for the proposed model is presented, in which all the operations are formally defined. The proposed model is proved to be a superset of the conventional relational model (CRM). The functionality and validity of the model are shown by a prototype implementation that has been undertaken in the functional programming language Miranda.
Keywords: nested relations, recursive algebra, recursive nested operations, relational data model.
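The nesting idea behind such a model can be illustrated with a pair of nest/unnest operations over a flat binary relation; this sketch handles only one level of nesting, whereas the model supports arbitrary depth:

    from itertools import groupby
    from operator import itemgetter

    # a flat (1NF) relation: each tuple is (department, employee)
    flat = [("Sales", "Ann"), ("Sales", "Bob"), ("IT", "Cho")]

    def nest(relation, key_index):
        # group tuples sharing the key into a nested, relation-valued attribute
        relation = sorted(relation, key=itemgetter(key_index))
        return [(k, [t[1 - key_index] for t in grp])
                for k, grp in groupby(relation, key=itemgetter(key_index))]

    def unnest(nested):
        # inverse operation: flatten the nested attribute back into 1NF tuples
        return [(k, v) for k, values in nested for v in values]

    nested = nest(flat, 0)
    print(nested)            # e.g. [('IT', ['Cho']), ('Sales', ['Ann', 'Bob'])]
    print(unnest(nested))    # back to flat (department, employee) tuples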