Search results for: big data actors roles
7379 Real Time Approach for Data Placement in Wireless Sensor Networks
Authors: Sanjeev Gupta, Mayank Dave
Abstract:
The issue of real-time and reliable report delivery is extremely important for taking effective decisions in a real-world, mission-critical Wireless Sensor Network (WSN) based application. Sensor data behaves differently in many ways from the data in traditional databases. WSNs need a mechanism to register, process queries, and disseminate data. In this paper we propose an architectural framework for data placement and management. We propose a reliable and real-time approach for data placement and for achieving data integrity using self-organized sensor clusters. Instead of storing information in individual cluster heads, as suggested in some protocols, our architecture stores the information of all clusters within a cell in the corresponding base station. For data dissemination and action in the wireless sensor network we propose to use Action and Relay Stations (ARS). To reduce the average energy dissipation of sensor nodes, data is sent to the nearest ARS rather than to the base station. We have designed our architecture so as to achieve greater energy savings, enhanced availability and reliability.
Keywords: Cluster head, data reliability, real time communication, wireless sensor networks.
7378 Management of English Language Teaching in Higher Education
Authors: Vishal D. Pandya
Abstract:
A great deal of perceptible change has been taking place in the way institutions of higher learning are managed in India today. It is believed that managers whose intuition proves to be accurate often tend to be the most successful, and this is what makes them almost like entrepreneurs. A certain entrepreneurial spirit is expected, and success requires a degree of insight on the manager's part, depending on the situation and, more importantly, on heterogeneity as well as the socio-cultural aspect. Teachers in higher education have to play multiple roles to make sure that the learning-teaching process becomes effective in the real sense of the term. This paper takes a close look at this issue in the context of the management of English language teaching in higher education, by examining target situation analyses at the socio-cultural level.
Keywords: Management, language teaching, English language teaching, higher education.
7377 Managerial Styles of Asian Executives: The Case of Thailand
Authors: Teerayout Wattanasupachoke
Abstract:
This research project was developed in order to study the managerial styles of modern Thai executives. A thorough understanding will lead to continuous improvement and efficient performance of Thai business organizations. Regarding managerial skills, Thai executives focus heavily upon human skills. Also, the negotiator role is the most emphasized in their management. In addition, Thai executives pay most attention to the fundamental management principles, including harmony and unity of direction of the organizations. Moreover, the management techniques consisting of teamwork and career planning are of their main concern. Finally, Thai executives wish to enhance their firms' image and employees' morale through conducting ethical and socially responsible activities. The major tactic deployed to stimulate employees' ethical behaviors and mindset is the development of a code of ethics.
Keywords: Management, Managerial Styles, Asian Executives, Thailand.
7376 Data Mining in Medicine Domain Using Decision Trees and Support Vector Machine
Authors: Djamila Benhaddouche, Abdelkader Benyettou
Abstract:
In this paper, we use data mining to extract biomedical knowledge. In general, complex biomedical data collected in population studies are treated by statistical methods; although robust, these are not sufficient in themselves to harness the potential wealth of the data. To that end, two supervised learning algorithms are used: Decision Trees and Support Vector Machines (SVM). These supervised classification methods are used to diagnose thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
Keywords: Classifier, decision tree algorithms, knowledge extraction, Support Vector Machine.
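As a rough illustration of the classification step described above, the sketch below trains a decision tree and an SVM with scikit-learn; the synthetic feature matrix, class count, and hyperparameters are placeholders, not details taken from the paper's thyroid data.

```python
# Minimal sketch (not the authors' exact pipeline): fit a decision tree and an
# SVM on a labelled biomedical-style feature matrix and compare test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data standing in for thyroid measurements and diagnosis labels.
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

for name, clf in [("Decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                  ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    clf.fit(X_train, y_train)
    print(name, "test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```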
7375 Leadership's Controlling via Complexity Investigation in Crisis Scenarios
Authors: Jiří Barta, Oldřich Svoboda, Jiří F. Urbánek
Abstract:
This paper discusses two sides of the same coin in crisis scenario dynamics. On one side is the negative role of subsidiary scenario branches, which weaken a scenario's compactness through unduly chaotic atomizing; the many cases of interactive feedback increase its complexity. This negative role is reflected in the complexity of use cases and weakens leader compliance, which amounts to a 'readiness for the provision of controlling capabilities'. Leader dissatisfaction means zero compliance, but in fact it acts as a 'crossbar' (an interface) between planning and executing use cases. On the other side of the coin, the advantage of rich scenario branching can be seen in its support of response awareness, readiness, preparedness, adaptability, creativity and flexibility. Here, rich scenario branching contributes to the steadiness and resistance of the scenario mission actors. All of this will be presented in live PowerPoint 'Blazons', modelled via DYVELOP (Dynamic Vector Logistics of Processes) at the conference.
Keywords: Leadership, Controlling, Complexity, DYVELOP, Scenarios.
7374 A Software Framework for Predicting Oil-Palm Yield from Climate Data
Authors: Mohd. Noor Md. Sap, A. Majid Awan
Abstract:
Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data to find spatio-temporal patterns in climate data using kernel methods, which offer strength in dealing with complex data. This work draws inspiration from the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high-dimensional feature space by replacing inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, allowing effective and efficient data analysis by exploring patterns and structures in the data, and can thus be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield.
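To make the clustering step concrete, here is a minimal sketch of weighted kernel k-means with an RBF kernel. The weight vector, kernel bandwidth, and toy data are assumptions, and the paper's spatial-constraint term and robustness modifications are not reproduced.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Pairwise RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def weighted_kernel_kmeans(K, w, k, n_iter=50, seed=0):
    """Cluster points given a kernel matrix K and per-point weights w.
    Distances to cluster means are computed entirely in feature space."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = (labels == c)
            if w[mask].sum() == 0:                 # re-seed an emptied cluster
                labels[rng.integers(0, n)] = c
                mask = (labels == c)
            wc, sw = w[mask], w[mask].sum()
            # ||phi(x_i) - m_c||^2 = K_ii - 2*sum_j w_j K_ij / sw
            #                        + sum_{j,l} w_j w_l K_jl / sw^2
            second = (K[:, mask] @ wc) / sw
            third = wc @ K[np.ix_(mask, mask)] @ wc / sw**2
            dist[:, c] = np.diag(K) - 2 * second + third
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage: two blobs, uniform weights.
X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 4])
print(weighted_kernel_kmeans(rbf_kernel(X), np.ones(len(X)), k=2))
```

Distances are computed purely from the kernel matrix, which is what lets a linear clustering rule separate non-linear structure in the original space.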
7373 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map
Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo
Abstract:
Recently, technologies based on three-dimensional (3D) space information are being developed and quality of life is improving as a result. Research on real-time digital map (RDM) is being conducted now to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.
Keywords: RDM, multi-source data, big data, U-City.
7372 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-
Authors: Nieto Bernal Wilson, Carmona Suarez Edgar
Abstract:
Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations show some deficiencies. Part of the problem lies in a lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities needed to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools, which are of great interest to business intelligence, since they are the base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision-making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP and AUP. It also covers object-relational models, spatial data models, and a baseline of data modeling under UML and big data, seeking in this way to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account the processes for information analysis, visualization and data mining, particularly for generating patterns and models derived from the structured object facts.
Keywords: Data warehouse, data model, big data, object fact, object-relational fact, data warehouse development process.
7371 Distributed Data-Mining by Probability-Based Patterns
Authors: M. Kargar, F. Gharbalchi
Abstract:
In this paper a new method is suggested for distributed data mining by probability patterns. These patterns use decision trees and decision graphs, and are intended to be valid, novel, useful, and understandable. Considering a set of functions, the system reaches a good pattern or better objectives. By using the suggested method we are able to extract useful information from massive and multi-relational databases.
Keywords: Data mining, decision tree, decision graph, pattern, relationship.
7370 Determination and Comparison of Some Elements in Different Types of Orange Juices and Investigation of Health Effects
Authors: F. Demir, A. S. Kipcak, O. Dere Ozdemir, E. Moroydor Derun, S. Piskin
Abstract:
Fruit juices play important roles in human health as a key part of nutrition. Juice and nectar are two categories of drinks with so many variations that consumers, regardless of age, lifestyle and taste preferences, can find their favorites. Juices contain 100% pulp, whereas the pulp content of nectar ranges between 25% and 50%. In this study, the potassium (K), magnesium (Mg), and phosphorus (P) contents in orange juice and nectar are determined to support conscious consumption. For this purpose, inductively coupled plasma optical emission spectrometry (ICP-OES) is used to determine the K, Mg, and P contents in orange juice and nectar. Furthermore, the daily intake of these elements from orange juice and nectar, which affects human health, is also investigated. From the experimental results, the K, Mg and P contents are found to be 1351, 73.25 and 89.27 ppm in orange juice and 986, 33.76 and 51.30 ppm in orange nectar, respectively.
Keywords: Orange juice, nectar, ICP-OES, element.
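As a back-of-the-envelope illustration of how such concentrations translate into intake, the snippet below converts the reported ppm values into milligrams per serving. The 200 mL serving size and the approximation 1 ppm ≈ 1 mg/L (juice density ≈ 1 g/mL) are assumptions for illustration, not values from the study.

```python
# Rough conversion from measured concentration to intake per serving.
# Assumes 1 ppm ~= 1 mg/L (juice density ~1 g/mL); the 200 mL serving size
# is an illustrative assumption, not a value from the study.
serving_l = 0.2
juice_ppm = {"K": 1351, "Mg": 73.25, "P": 89.27}
nectar_ppm = {"K": 986, "Mg": 33.76, "P": 51.30}

for element in juice_ppm:
    juice_mg = juice_ppm[element] * serving_l
    nectar_mg = nectar_ppm[element] * serving_l
    print(f"{element}: ~{juice_mg:.1f} mg per 200 mL juice, "
          f"~{nectar_mg:.1f} mg per 200 mL nectar")
```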
7369 K-Means for Spherical Clusters with Large Variance in Sizes
Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan
Abstract:
Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is suitable for spherical-shaped clusters of similar sizes and densities. The quality of the resulting clusters decreases when the data set contains spherical-shaped clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.
Keywords: K-means, data clustering, cluster analysis.
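A minimal sketch of the centre-shifting idea described above, under assumed details: synthetic two-cluster data, a fixed shift fraction, and a single reassignment pass. The authors' exact rule for recomputing small-cluster memberships may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: one large, dense cluster and one much smaller cluster.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, size=(500, 2)),
               rng.normal([6, 0], 0.5, size=(30, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels, centers = km.labels_.copy(), km.cluster_centers_.copy()

# Identify the largest and smallest clusters by membership count.
sizes = np.bincount(labels, minlength=2)
big, small = sizes.argmax(), sizes.argmin()

# Shift the large cluster's centre a fraction of the way toward the small one,
# then recompute memberships; the step size 0.25 is an illustrative choice.
centers[big] += 0.25 * (centers[small] - centers[big])
new_labels = np.argmin(((X[:, None, :] - centers[None, :, :])**2).sum(-1), axis=1)
print("sizes before:", sizes, "after shift:", np.bincount(new_labels, minlength=2))
```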
7368 Representing Data without Lost Compression Properties in Time Series: A Review
Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Uncertain data is believed to be an important issue in building up a prediction model. The main objective in time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques in dealing with uncertain data, specifically those which handle the uncertain data condition by minimizing the loss of compression properties.
Keywords: Compression properties, uncertainty, uncertain time series, mining technique, weather prediction.
7367 Are XBRL-based Financial Reports Better than Non-XBRL Reports? A Quality Assessment
Authors: Zhenkun Wang, Simon S. Gao
Abstract:
Using a scoring system, this paper provides a comparative assessment of the quality of data between XBRL-formatted financial reports and non-XBRL financial reports. It shows a major improvement in the quality of data of XBRL-formatted financial reports. Although XBRL-formatted financial reports did not show much advantage in quality at the beginning, they have lately displayed a large improvement in data quality in almost all aspects. With improved XBRL web applications for data management, presentation and analysis, XBRL-formatted financial reports have much better accessibility, are more accurate and are better in timeliness.
Keywords: Data quality, financial report, information, XBRL.
7366 Modeling of Random Variable with Digital Probability Hyper Digraph: Data-Oriented Approach
Authors: A. Habibizad Navin, M. Naghian Fesharaki, M. Mirnia, M. Kargar
Abstract:
In this paper we introduce the Digital Probability Hyper Digraph for modeling random variables as a hierarchical data-oriented model.
Keywords: Data-oriented models, data structure, Digital Probability Hyper Digraph, random variable, statistics and probability.
7365 Use of Plant Antimicrobials for Food Preservation
Authors: Oladotun A. Fatoki, Deborah A. Onifade
Abstract:
Spoilage occurs in plant produce due to the action of field and storage microorganisms. The conditions of storage can also cause physiological spoilage. Various methods exist to ensure that these food substances maintain their quality long after harvesting. However, many of these methods either fail to keep the plant produce for the required period or predispose it to other spoilage risks. The major shortcoming posed by the use of many antimicrobials is the chemical residues they deposit in the food substance. The use of plants in preservation goes back a long time; though little understood then, it served its purposes. A better understanding of the roles of these plant parts in increasing the shelf life of farm produce has helped in the creation of more effective and safer means of pest and microbial control. This can be extended to plants that have not initially been used for these purposes. Microbial sources should also be investigated, as these have provided cheaper sources of secondary metabolites.
Keywords: Antimicrobials, Food preservation, Phytochemicals
7364 Ontology-Navigated Tutoring System for Flipped-Mastery Model
Authors: Masao Okabe
Abstract:
Nowadays, in Japan, a wide variety of students enter university, and one of the main roles of introductory courses for freshmen is to make such students well prepared for subsequent intermediate courses. For that purpose, the flipped-mastery model is not enough, because the videos usually used in a flipped classroom are not adaptive and do not fit all freshmen with different academic performances. This paper proposes an ontology-navigated tutoring system called EduGraph. Using EduGraph, students can prepare for and review a class in a more flexibly personalizable way than with videos. By structuring learning materials through its ontology, EduGraph also helps students integrate what they learn as knowledge, and makes learning materials sharable. EduGraph was used for an introductory course for freshmen. This application suggests that EduGraph is effective.
Keywords: Adaptive e-learning, flipped classroom, mastery learning, ontology.
7363 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme enables transmission of big data in a more secure manner by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmission region, segmenting the selected region, determining a probability ratio for each node (capture node, non-capture node and eavesdropper node) in every segment, and evaluating this probability using a binary evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are blocked by the cooperative jamming scheme and the data is then transmitted over the two hops.
Keywords: Big data, cooperative jamming, energy balance, physical layer, two-hop transmission, wireless security.
7362 The Influence of Knowledge Transfer on Outputs of Innovative Process – Case Study of Czech Regions
Authors: J. Stejskal, P. Hajek
Abstract:
The goal of this article is the analysis of knowledge transfer at the regional level of the Czech Republic. We show how the goals of enterprises' innovative activities are related to the rate of cooperation with different actors within regional innovation systems as well as in other world regions. The results show that the most important partners of enterprises are their suppliers and clients in most Czech regions. The cooperation rate of enterprises correlates significantly mainly with enterprises' efforts to enter new markets and reduce labour costs per unit of output. The importance of this cooperation decreases as the distance to the partner increases. Regarding the type of cooperating partner, cooperation within an enterprise was associated with increasing market share and decreasing labour costs. On the other hand, cooperation with clients was associated with efforts to replace outdated products or processes or to enter new markets. Less attention is paid to cooperation with government authorities and organizations; the reasons for the marginalization of this cooperation should be submitted to further detailed investigation.
Keywords: Knowledge transfer, innovative process, Czech Republic, region.
7361 Public R&D Risk and Risk Management Policy
Authors: Youngseok Lee, Dongjin Chung, Youngjin Kim
Abstract:
R&D risk management has been suggested as one of the management approaches for accomplishing the goals of public R&D investment. Investment in basic science and core technology development is an essential role of government in securing the social base needed for continuous economic growth. It is also an important role of the science and technology policy sector to create a positive environment in which the outcomes of public R&D can be diffused in a stable fashion, by controlling in advance the uncertainties and risk factors that may arise when such achievements are applied to society and industry. Various policies have already been implemented to manage uncertainties and variables that may have a negative impact on accomplishing public R&D investment goals. However, new policy measures can be derived to complement the existing policies and to explore the direction of progress by analyzing them as a policy package from the viewpoint of R&D risk management.
Keywords: Risk management, public R&D policy, science and technology policy, performance management.
7360 Requirements Engineering via Controlling Actors Definition for the Organizations of European Critical Infrastructure
Authors: Jiri F. Urbanek, Jiri Barta, Oldrich Svoboda, Jiri J. Urbanek
Abstract:
The organizations of European and Czech critical infrastructure have a specific position, mission, characteristics and behaviour in the European Union and in Czech state and business environments, with specific requirements for regional and global security environments. They must respect national security policy and global rules, requirements and standards in all their internal and external processes of supply-customer chains and networks. Controlling is a generalized capability to have control over situational policy. The aim and purpose of this paper is to introduce controlling as a new and necessary process attribute that provides the critical infrastructure environment with the capability, and the benefit, of achieving its commitments regarding the effectiveness of the quality management system in meeting customer and user requirements, as well as the continual improvement of the overall performance and efficiency of critical infrastructure organizations' processes and their societal security, via continual planning improvement with DYVELOP modelling.
Keywords: Added Value, DYVELOP, Controlling, Environments, Process Approach.
7359 Study of Efficiency and Capability LZW++ Technique in Data Compression
Authors: Yusof Mohd Kamir, Mat Deris Mohd Sufian, Abidin Ahmad Faisal Amri
Abstract:
The purpose of this paper is to show the efficiency and capability of LZWµ in data compression. The LZWµ technique is an enhancement of the existing LZW technique; modification of the existing LZW is needed to produce LZWµ. LZW reads one character at a time, whereas the LZWµ technique reads three characters at a time. This paper focuses on data compression and tests the efficiency and capability of LZWµ on different data formats such as doc, pdf and text files. Several experiments have been carried out with the different types of data format. The results show that the LZWµ technique is better than the existing LZW technique in terms of file size.
Keywords: Data Compression, Huffman Encoding, LZW, LZWµ, RLL, Size.
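For reference, below is a minimal sketch of the baseline LZW compressor that the technique above builds on; the LZWµ modification (consuming three characters per step) is described only at a high level in the abstract and is not reproduced here.

```python
def lzw_compress(data: str):
    """Standard LZW: grow a dictionary of substrings and emit integer codes."""
    dictionary = {chr(i): i for i in range(256)}   # single-character seeds
    next_code = 256
    w, out = "", []
    for ch in data:
        wc = w + ch
        if wc in dictionary:
            w = wc                                 # keep extending the match
        else:
            out.append(dictionary[w])              # emit code for longest match
            dictionary[wc] = next_code             # learn the new substring
            next_code += 1
            w = ch
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_compress("TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for 24 input characters:", codes)
```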
7358 Ethiopian Opposition Political Parties and Rebel Fronts: Past and Present
Authors: Wondwosen Teshome B.
Abstract:
In a representative democracy, political parties promote vital competition on different policy issues and play essential roles by offering ideological alternatives. They also provide channels for citizens' participation in government decision-making processes, and they are significant conduits and interpreters of information about government. This paper attempts to examine how opposition political parties and rebel fronts emerged in Ethiopia and examines their present conditions. Selected case studies of political parties and rebel fronts are included to highlight the status and role of opposition groups in the country under the three successive administrations: Haile Selassie (1930-1974), the Derg (1974-1991), and the EPRDF (1991-present).
Keywords: Ethiopia, hybrid regime, incumbent, multi-party election, opposition party, political party, rebel fronts.
7357 A Learner-Centred or Artefact-Centred Classroom? Impact of Technology, Artefacts, and Environment on Task Processes in an English as a Foreign Language Classroom
Authors: Nobue T. Ellis
Abstract:
This preliminary study attempts to see whether the learning environment influences the instructor's teaching strategies and learners' in-class activities in a foreign language class at a university in Japan. The class under study was conducted in a computer room, while the majority of classes of the same course were offered in traditional classrooms without computers. The study also examines whether the unplanned blended learning environment enhanced, or worked against, the achievement of course goals, by paying close attention to in-class artefacts such as computers. In the macro-level analysis, the course syllabus and weekly itinerary of the course were examined; in the micro-level analysis, nonhuman actors in their environments were named and analyzed to see how they influenced the learners' task processes. The results indicated that students were heavily influenced by the presence of computers, which led them to disregard some aspects of the intended learning objectives.
Keywords: Computer-assisted language learning, actor-network theory, English as a foreign language, task-based teaching.
7356 Impact of Stack Caches: Locality Awareness and Cost Effectiveness
Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang
Abstract:
Treating data based on its location in memory has received much attention in recent years because different regions have different properties, which offer important opportunities for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache. One important property of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate in different caches. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved with 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1 KB stack cache improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
Keywords: Hit rate, Locality of program, Stack cache, and Stack data.
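To illustrate the kind of split-cache experiment described above, here is a toy set-associative cache model with LRU replacement driven by a synthetic address trace. The sizes, associativities, trace mix, and address ranges are illustrative assumptions and do not reproduce the SimpleScalar configuration or the benchmarks used.

```python
import random
from collections import OrderedDict

class Cache:
    """Tiny set-associative cache with LRU replacement that counts hits."""
    def __init__(self, size_bytes, line_bytes, ways):
        self.line = line_bytes
        self.ways = ways
        self.n_sets = size_bytes // (line_bytes * ways)
        self.sets = [OrderedDict() for _ in range(self.n_sets)]
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        block = addr // self.line
        s = self.sets[block % self.n_sets]
        tag = block // self.n_sets
        if tag in s:
            self.hits += 1
            s.move_to_end(tag)            # refresh LRU position
        else:
            if len(s) >= self.ways:
                s.popitem(last=False)     # evict least-recently-used line
            s[tag] = True

    def hit_rate(self):
        return self.hits / self.accesses if self.accesses else 0.0

# Synthetic trace: stack references cluster near the stack pointer,
# non-stack references spread over a wide heap range.
random.seed(0)
stack_cache = Cache(2 * 1024, 32, 1)      # 2 KB, direct-mapped (1-way)
other_cache = Cache(30 * 1024, 32, 4)     # the non-stack side of the split
sp = 0x7FFF_0000
for _ in range(100_000):
    if random.random() < 0.6:             # a "stack" access
        stack_cache.access(sp - random.randrange(256))
    else:                                 # a "non-stack" access
        other_cache.access(random.randrange(1 << 20))
print(f"stack hit rate: {stack_cache.hit_rate():.3f}, "
      f"non-stack hit rate: {other_cache.hit_rate():.3f}")
```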
7355 Cross Project Software Fault Prediction at Design Phase
Authors: Pradeep Singh, Shrish Verma
Abstract:
Software fault prediction models are created by using the source code, metrics computed from the same or a previous version of the code, and the related fault data. Some companies do not store and keep track of all the artifacts that are required for software fault prediction. To construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can be used as an initial guideline for projects where no previous fault data is available. We analyze seven datasets from the NASA Metrics Data Program which offer design as well as code metrics. Overall, the cross-project results are comparable to learning from within-company data.
Keywords: Software metrics, fault prediction, cross-project, within-project.
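A minimal sketch of the cross-project setup described above: a Naïve Bayes classifier is trained on one project's design metrics and evaluated on another's. The synthetic metric arrays and fault labels are placeholders for the NASA MDP datasets, which are not bundled here.

```python
# Train on the "source" project, predict faults in the "target" project.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
# Placeholder "design metrics" (e.g. branch counts, fan-in/fan-out) per module.
X_src = rng.normal(size=(300, 5))
y_src = (X_src[:, 0] + rng.normal(size=300) > 1).astype(int)
X_tgt = rng.normal(size=(200, 5))
y_tgt = (X_tgt[:, 0] + rng.normal(size=200) > 1).astype(int)

model = GaussianNB().fit(X_src, y_src)      # learn on the source project
pred = model.predict(X_tgt)                 # apply to the target project
print("precision:", precision_score(y_tgt, pred),
      "recall:", recall_score(y_tgt, pred))
```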
7354 Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the Generalized Extreme Value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (C.D.C.). Considering two data sets (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, a return-level analysis is carried out. It was found that in the stationary model the return levels are constant over time for the raw data, while for the simulated data the return levels show an increasing trend but with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend, again with an upper bound. This clearly shows that even though temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that is not exceeded. The results of this paper are vital for agricultural and environmental research.
Keywords: Return level, Generalized Extreme Value (GEV), meteorology, forecasting.
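For readers unfamiliar with return levels, the sketch below fits a stationary GEV to synthetic annual temperature maxima and reads off return levels as quantiles; the data and return periods are placeholders, and the paper's non-stationary model is not reproduced.

```python
# Block-maxima GEV fitting and return-level estimation (stationary case).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max = 30 + 2.5 * rng.gumbel(size=40)      # placeholder yearly maxima (deg C)

c, loc, scale = genextreme.fit(annual_max)       # fit the stationary GEV
for T in (10, 25, 50, 100):                      # return periods in years
    # The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
    z_T = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {z_T:.2f} deg C")
```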
7353 Mining Multicity Urban Data for Sustainable Population Relocation
Authors: Xu Du, Aparna S. Varde
Abstract:
In this research, we propose to conduct diagnostic and predictive analysis about the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract the urban development trends as land use change patterns from a variety of data sources. The results are treated as part of urban big data with other information such as population change and economic conditions. Multiple data mining methods are deployed on this data to analyze nonlinear relationships between parameters. The result determines the driving force of population relocation with respect to urban sprawl and urban sustainability and their related parameters. This work sets the stage for developing a comprehensive urban simulation model for catering to specific questions by targeted users. It contributes towards achieving sustainability as a whole.
Keywords: Data mining, environmental modeling, sustainability, urban planning.
7352 An Ant-based Clustering System for Knowledge Discovery in DNA Chip Analysis Data
Authors: Minsoo Lee, Yun-mi Kim, Yearn Jeong Kim, Yoon-kyung Lee, Hyejung Yoon
Abstract:
Biological data has several characteristics that strongly differentiate it from typical business data. It is much more complex, usually large in size, and continuously changing. Until recently, business data has been the main target for discovering trends, patterns and future expectations. However, with the recent rise of biotechnology, the powerful technology that was used for analyzing business data is now being applied to biological data. With this advanced technology at hand, the main trend in biological research is rapidly changing from structural DNA analysis to understanding the cellular functions of DNA sequences. DNA chips are now being used to perform experiments, and DNA analysis processes are being used by researchers. Clustering is one of the important processes used for grouping together similar entities. There are many clustering algorithms, such as hierarchical clustering, self-organizing maps, k-means clustering and so on. In this paper, we propose a clustering algorithm that imitates the ecosystem, taking into account the features of biological data. We implemented the system using an ant-colony clustering algorithm. The system decides the number of clusters automatically. The system processes the input biological data, runs the ant-colony algorithm, draws the topic map, assigns clusters to the genes and displays the output. We tested the algorithm with test data of 100 to 1000 genes and 24 samples and show promising results for applying this algorithm to clustering DNA chip data.
Keywords: Ant colony system, biological data, clustering, DNA chip.
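As background, here is a minimal sketch of Lumer-Faieta-style ant clustering, the family of algorithms the system above builds on: ants wander a toroidal grid, picking up items that look out of place and dropping them near similar items. The grid size, the k1/k2/alpha constants, the step count, and the toy data are illustrative assumptions; the paper's automatic cluster counting and topic map are not reproduced.

```python
import numpy as np

def ant_clustering(data, grid=25, n_ants=10, steps=50_000,
                   k1=0.1, k2=0.15, alpha=0.5, seed=0):
    """Lumer-Faieta-style ant clustering sketch: ants wander a toroidal grid,
    pick up items that are dissimilar to their neighbourhood and drop them
    where similar items already lie. Returns each item's final grid cell."""
    rng = np.random.default_rng(seed)
    n = len(data)
    pos = rng.integers(0, grid, size=(n, 2))          # item grid positions
    placed = np.ones(n, dtype=bool)                   # False while carried
    ants = rng.integers(0, grid, size=(n_ants, 2))
    carrying = [None] * n_ants

    def neighbourhood_density(i, cell):
        # Average similarity of item i to placed items in the 3x3 patch.
        others = placed.copy()
        others[i] = False
        d = np.abs(pos[others] - cell)
        d = np.minimum(d, grid - d)                   # toroidal wrap-around
        near = d.max(axis=1) <= 1
        if not near.any():
            return 0.0
        dist = np.linalg.norm(data[others][near] - data[i], axis=1)
        return max(0.0, np.sum(1.0 - dist / alpha) / 9.0)

    for _ in range(steps):
        a = rng.integers(n_ants)
        ants[a] = (ants[a] + rng.integers(-1, 2, size=2)) % grid
        if carrying[a] is None:
            here = np.where(placed & (pos == ants[a]).all(axis=1))[0]
            if len(here):
                i = here[0]
                f = neighbourhood_density(i, ants[a])
                if rng.random() < (k1 / (k1 + f)) ** 2:   # pick-up probability
                    carrying[a], placed[i] = i, False
        else:
            i = carrying[a]
            f = neighbourhood_density(i, ants[a])
            if rng.random() < (f / (k2 + f)) ** 2:        # drop probability
                pos[i], placed[i], carrying[a] = ants[a], True, None
    return pos

# Toy usage: two groups of similar "expression profiles" end up in
# spatially separated heaps on the grid.
rng = np.random.default_rng(1)
data = np.vstack([rng.random((20, 5)) * 0.1, rng.random((20, 5)) * 0.1 + 0.9])
print(ant_clustering(data)[:5])
```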
7351 The Resource Description Framework (RDF) as a Modern Structure for Medical Data
Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune
Abstract:
The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but come from distributed resources. The Charité - University Hospital Berlin has established, together with the German Research Foundation (DFG), a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every single partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various centres involved. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model and, in a first phase, transformed original data coming from the web-based Electronic Patient Record database TBase©.
Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.
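To show what such an RDF representation can look like, the sketch below records a laboratory finding as triples with rdflib; the namespace, class and property names, and values are illustrative placeholders rather than the OpEN.SC schema.

```python
# Represent a patient's lab finding as RDF triples (assumed toy schema).
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/nephrology/")
g = Graph()
g.bind("ex", EX)

patient = EX["patient/12345"]
finding = EX["finding/67890"]

g.add((patient, RDF.type, EX.Patient))
g.add((finding, RDF.type, EX.LabFinding))
g.add((finding, EX.belongsTo, patient))
g.add((finding, EX.parameter, Literal("serum creatinine")))
g.add((finding, EX.value, Literal(1.4, datatype=XSD.decimal)))
g.add((finding, EX.unit, Literal("mg/dL")))

print(g.serialize(format="turtle"))     # emit the graph in Turtle syntax
```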
7350 XML Data Management in Compressed Relational Database
Authors: Hongzhi Wang, Jianzhong Li, Hong Gao
Abstract:
XML is an important standard for data exchange and representation. Since relational databases are mature systems, using a relational database to support XML data may bring some advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is adapted to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing techniques on compressed relations and for the special structure of XML are presented. In this paper, technologies for XQuery processing in a compressed relational database are presented.
Keywords: XML, compression, query processing.