Search results for: data infrastructure
25139 Health Transformation Program and Effects on Health Expenditures
Authors: Zeynep Karacor, Rahime Hulya Ozturk
Abstract:
In recent years, the rise in population density and the problem of an aging population have drawn attention to health expenditures. In Turkey, a number of regulations and infrastructure changes in the health sector have been introduced. These changes are called the Health Transformation Program. The program seeks to improve the productivity of health services, patient satisfaction, and the quality of services. In this context, some radical changes have been applied in the Turkish economy. The aim of this paper is to present the effects of the Health Transformation Program on health expenditures. The first part of the paper discusses the health system and its applications in Turkey. The second part explains the aims of the Health Transformation Program, and the third part examines the effects of the program on health expenditures.
Keywords: health transformation program, Turkey, health services, health expenditures
Procedia PDF Downloads 400
25138 Using Vulnerability to Reduce False Positive Rate in Intrusion Detection Systems
Authors: Nadjah Chergui, Narhimene Boustia
Abstract:
Intrusion Detection Systems (IDSs) are an essential tool for network security infrastructure. However, IDSs have a serious problem: they generate a massive number of alerts, most of which are false positives that can hide true alerts and leave the analyst struggling to identify the right alerts and report the true attacks. The purpose of this paper is to present a formal model for a correlation engine that reduces false positive alerts based on contextual vulnerability information. To that end, we propose a model based on the non-monotonic description logic JClassicδє, augmented with a default (δ) and an exception (є) operator, which allows dynamic inference according to contextual information.
Keywords: context, default, exception, vulnerability
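The paper's formalism is a description logic rather than code, but the core idea, a default rule ("every alert is a true positive") overridden by an exception when vulnerability context contradicts it, can be illustrated with a minimal Python sketch; the alert fields, hosts, and CVE inventory below are hypothetical.

```python
# Hypothetical sketch: suppress IDS alerts when the targeted host is not
# vulnerable to the exploited CVE (a default rule with an exception).
from dataclasses import dataclass

@dataclass
class Alert:
    signature: str
    cve: str
    target_host: str

# Assumed vulnerability inventory: host -> CVEs it is vulnerable to.
VULN_DB = {
    "10.0.0.5": {"CVE-2021-44228"},
    "10.0.0.7": set(),
}

def correlate(alerts):
    """Default: treat every alert as a true positive.
    Exception: demote it when contextual information shows the target
    is not vulnerable to the exploited CVE."""
    true_alerts, false_positives = [], []
    for a in alerts:
        if a.cve in VULN_DB.get(a.target_host, set()):
            true_alerts.append(a)        # default applies
        else:
            false_positives.append(a)    # exception fires
    return true_alerts, false_positives

alerts = [Alert("log4shell attempt", "CVE-2021-44228", "10.0.0.5"),
          Alert("log4shell attempt", "CVE-2021-44228", "10.0.0.7")]
kept, dropped = correlate(alerts)
print(len(kept), "true alerts,", len(dropped), "suppressed")
```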
Procedia PDF Downloads 261
25137 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
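As a rough illustration of the non-laboratory model described above, the sketch below trains a Random Forest on synthetic age, sex, BMI, and waist-circumference features; the data, the label rule, and the hyperparameters are assumptions for demonstration, not the study's cohort or tuned models.

```python
# Minimal sketch (synthetic data): flagging CKD risk from
# non-laboratory variables only, with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(20, 80, n),   # age (years)
    rng.integers(0, 2, n),     # sex
    rng.normal(25, 4, n),      # body mass index
    rng.normal(85, 12, n),     # waist circumference (cm)
])
# Synthetic label loosely tied to age and BMI, for illustration only.
y = ((X[:, 0] > 55) & (X[:, 2] > 27)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```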
Procedia PDF Downloads 111
25136 Check Factors Contributing to the Increase or Decrease in Labor Productivity in Employees Applied Science Center Municipal Andimeshk
Authors: Hossein Boromandfar, Ahmad Ghalavandi
Abstract:
This paper examines the importance of human resources as a strategic resource and the factors that lead to increased labor productivity at the Applied Science Center of Andimeshk Municipality. First, the concepts and definitions of productivity and the factors affecting it are presented; the paper then sets out recommendations for improving productivity at the center. It is the factors that increase labor productivity that are of value here: competent human resources are the most important infrastructure an organization can build, since they drive its development and promotion. Employing qualified staff at the university, with a focus on specific objectives, can be effective in its advancement.
Keywords: productivity, manage, human resources, center for applied science
Procedia PDF Downloads 424
25135 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines
Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma
Abstract:
Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, with the aim of avoiding possible accidents in rural and urban areas. This analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness. The entire datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn avoid unnecessary memory lapses. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model which can be trained on, and adapt itself to, new data and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classification on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology as appropriate for this kind of research work.
Keywords: support vector machines (SVM), machine learning (ML), Department for Transport (DfT)
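As a hedged illustration of the SVM component, the sketch below trains a kernel SVM on a few synthetic accident attributes; the features, the severity label, and the kernel settings are assumptions, not the UK dataset or the paper's trained model.

```python
# Minimal sketch (synthetic data): classifying accident severity with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # urban (0) / rural (1)
    rng.integers(20, 70, n),   # speed limit (mph)
])
y = (X[:, 2] > 50).astype(int)   # synthetic "severe" label, illustration only

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```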
Procedia PDF Downloads 280
25134 A Relational Data Base for Radiation Therapy
Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez
Abstract:
As far as we know, no commercial solution is yet available that allows managing, openly and configurably according to user needs, the huge amount of data generated in a modern Radiation Oncology Department. Currently available information management systems are mainly focused on record-and-verify and clinical data, and only to a small extent on physical data. This results in a partial and limited use of the available information. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both information generated during patient planning and treatment and information of general interest for the whole department (i.e., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus obtain quantitative evaluations of our treatments. This would allow us to improve our workflow and protocols. To this end, we have implemented a relational database that allows us to use all the available information in a practical and efficient way. As always, we use only license-free software.
Keywords: information management system, radiation oncology, medical physics, free software
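A relational store of this kind can be prototyped entirely with license-free tools; the sketch below uses Python's built-in SQLite with a hypothetical three-table layout (patients, treatment plans, department-wide QA protocols), not the department's actual schema.

```python
# Minimal sketch (hypothetical schema): a license-free relational store for
# clinical/planning data and department-wide protocols, using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE treatment_plan (
    plan_id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(patient_id),
    technique TEXT,              -- e.g. VMAT, IMRT
    prescribed_dose_gy REAL
);
CREATE TABLE qa_protocol (
    protocol_id INTEGER PRIMARY KEY,
    title TEXT,
    scope TEXT                   -- department-wide documents
);
""")
conn.execute("INSERT INTO patient VALUES (1, 'anonymized')")
conn.executemany("INSERT INTO treatment_plan VALUES (?, 1, ?, ?)",
                 [(1, "VMAT", 60.0), (2, "IMRT", 50.4)])

# The kind of quantitative question such a system should answer:
for row in conn.execute("""SELECT technique, COUNT(*), AVG(prescribed_dose_gy)
                           FROM treatment_plan GROUP BY technique"""):
    print(row)
```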
Procedia PDF Downloads 247
25133 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University
Authors: Komol Phaisarn, Natcha Wattanaprapa
Abstract:
This is survey research with the objective of studying the safety of the data storage devices used by graduate students of academic year 2013 at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA principle (confidentiality, integrity, availability). A sample of 81 was drawn from the population by purposive sampling. The results show that most of the graduate students of academic year 2013 at Suan Sunandha Rajabhat University use a handy drive (USB flash drive) to store their data, and the safety level of the devices is good.
Keywords: security, safety, storage devices, graduate students
Procedia PDF Downloads 356
25132 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment
Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah
Abstract:
The data grid is a technology whose emergence has brought new challenges, such as the heterogeneity and availability of geographically distributed resources, fast data access, minimizing latency, and fault tolerance. Researchers interested in this technology address problems common to such systems, including task scheduling, load balancing, and replication. The latter is an effective solution for achieving good performance in terms of data access, better use of grid resources, and higher data availability at lower cost. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas that minimizes the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model that depends on several factors, such as bandwidth, data size, and storage nodes.
Keywords: response time, query, consistency, bandwidth, storage capacity, CERN
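As a toy version of such a cost model, the sketch below scores every candidate replica set by a read/write-weighted transfer time and picks the cheapest; the bandwidths, data size, and request mix are invented, and the paper's actual model may weigh additional factors.

```python
# Minimal sketch (assumed cost model): choose the replica set that minimizes
# the expected response cost. Reads go to the closest replica; writes must
# reach every replica to keep the copies coherent. Numbers are illustrative.
from itertools import combinations

NODES = {"siteA": 100.0, "siteB": 40.0, "siteC": 250.0}  # MB/s to clients
DATA_SIZE_MB = 512
READ_RATE, WRITE_RATE = 0.8, 0.2                         # request mix

def transfer_time(bw_mb_s):
    return DATA_SIZE_MB / bw_mb_s

def placement_cost(replicas):
    read_cost = min(transfer_time(NODES[n]) for n in replicas)
    write_cost = sum(transfer_time(NODES[n]) for n in replicas)
    return READ_RATE * read_cost + WRITE_RATE * write_cost

candidates = [set(c) for r in range(1, len(NODES) + 1)
              for c in combinations(NODES, r)]
best = min(candidates, key=placement_cost)
print("best replica set:", best, "cost:", round(placement_cost(best), 2))
```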
Procedia PDF Downloads 276
25131 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
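The strategy itself is not tied to any one model API, so the sketch below only shows the template side: a natural-language request wrapped with schema and output requirements before being sent to an LLM. The template fields and the commented-out call_llm() helper are hypothetical, not the paper's exact prompts.

```python
# Minimal sketch: a structured prompt template for LLM-driven
# data analysis code generation. All field names are assumptions.
PROMPT_TEMPLATE = """You are a data analysis assistant.
Dataset schema: {schema}
Task (natural language): {request}
Requirements:
- Return runnable Python (pandas) code only.
- Handle missing values explicitly.
- Print the final result.
"""

def build_prompt(schema: str, request: str) -> str:
    return PROMPT_TEMPLATE.format(schema=schema, request=request)

prompt = build_prompt(
    schema="sales(date: str, region: str, revenue: float)",
    request="Monthly total revenue per region, sorted descending.",
)
print(prompt)
# code = call_llm(prompt)   # hypothetical LLM client; review before executing
```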
Procedia PDF Downloads 48
25130 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece
Authors: N. Samarinas, C. Evangelides, C. Vrekos
Abstract:
The aim of this paper is the comparison of three different methods of producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall height. Each method is used to express these data as a fuzzy tolerance relation, which is then reformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations. Stations with high similarity can be used interchangeably in water resource management scenarios or to augment data from one another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
Keywords: classification, fuzzy logic, tolerance relations, rainfall data
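The max-min composition step is mechanical enough to sketch directly. Below is a minimal NumPy version that composes R with itself until the relation stops changing (i.e., becomes transitive) and then groups stations by an alpha-cut; the 4x4 relation is illustrative, not the Greek station data.

```python
# Minimal sketch: fuzzy tolerance relation -> equivalence relation by
# repeated max-min composition, then classification by an alpha-cut.
import numpy as np

def max_min_compose(r, s):
    # (R o S)[i, j] = max_k min(R[i, k], S[k, j])
    return np.max(np.minimum(r[:, :, None], s[None, :, :]), axis=1)

def transitive_closure(r, max_iter=20):
    for _ in range(max_iter):
        r2 = max_min_compose(r, r)
        if np.allclose(r2, r):
            break
        r = r2
    return r

# Illustrative 4-station tolerance relation (reflexive, symmetric).
R = np.array([[1.0, 0.8, 0.4, 0.5],
              [0.8, 1.0, 0.4, 0.5],
              [0.4, 0.4, 1.0, 0.4],
              [0.5, 0.5, 0.4, 1.0]])

E = transitive_closure(R)
alpha = 0.5
print((E >= alpha).astype(int))   # station groupings at confidence level alpha
```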
Procedia PDF Downloads 318
25129 Customer Satisfaction and Effective HRM Policies: Customer and Employee Satisfaction
Authors: S. Anastasiou, C. Nathanailides
Abstract:
The purpose of this study is to examine the possible link between employee and customer satisfaction. The service provided by employees helps to build a good relationship with customers and can help increase their loyalty. Published data on job satisfaction and indicators of customer service were gathered from relevant published works, which included data from five different countries. The reviewed data indicate a significant correlation between indicators of customer and employee satisfaction in the banking sector; the correlation between the two parameters was significant (Pearson correlation, R2=0.52, P<0.05). The reviewed data thus provide practical evidence linking these two parameters.
Keywords: job satisfaction, job performance, customer service, banks, human resources management
Procedia PDF Downloads 326
25128 The Application of Hellomac Rockfall Alert System in Rockfall Barriers: An Explainer
Authors: Kinjal Parmar, Matteo Lelli
Abstract:
The use of IoT technology in rockfall alert systems is relatively new. This paper explains the potential of one such alert system, HelloMac from Maccaferri, which gives transportation infrastructure asset owners a way to use their resources effectively in detecting boulder impacts on rockfall barriers. This ensures a faster assessment of the impacted barrier and subsequently facilitates the implementation of remedial works in an effective and timely manner. In addition, HelloMac can be integrated with other warning systems to alert vehicle users to unseen dangers ahead. HelloMac is also designed to work in remote areas where cell coverage is not available. Users are notified via mobile app, SMS, and email when a rockfall event occurs. By using such alert systems effectively, we can reduce the risk posed by rockfall hazards.
Keywords: rockfall, barrier, HelloMac, rockfall alert system
Procedia PDF Downloads 56
25127 A Comparative Study between Japan and the European Union on Software Vulnerability Public Policies
Authors: Stefano Fantin
Abstract:
The present analysis results from research undertaken in the course of the European-funded project EUNITY, which targets the gaps in research and development on cybersecurity and privacy between Europe and Japan. Under these auspices, the research presents a study of the policy approaches of Japan, the EU, and a number of Member States of the Union with regard to the handling and discovery of software vulnerabilities, with the aim of identifying methodological differences and similarities. This research builds upon a functional comparative analysis of both public policies and legal instruments from the identified jurisdictions. The analysis is based on semi-structured interviews with EUNITY partners, as well as on the researcher's participation in a recent report on software vulnerability from the Center for EU Policy Study. The European Union presents a rather fragmented legal framework on software vulnerabilities. The presence of a number of different pieces of legislation at the EU level (including the Network and Information Security Directive, the Critical Infrastructure Directive, the Directive on Attacks against Information Systems, and the proposal for a Cybersecurity Act), none with a clear focus on the subject, makes it difficult for both national governments and end-users (software owners, researchers, and private citizens) to gain a clear understanding of the Union's approach. Additionally, the current data protection reform package (the General Data Protection Regulation) seems to create legal uncertainty around security research. To date, at the Member State level, a few efforts towards transparent practices have been made, namely by the Netherlands, France, and Latvia; this research explains the policy approaches these countries have taken. Japan started implementing a coordinated vulnerability disclosure policy in 2004. To date, two amendments to the framework can be registered (2014 and 2017). The framework is further complemented by a series of instruments allowing researchers to responsibly disclose any new discovery. However, the policy has started to lose its efficiency due to a significant increase in reports made to the authority in charge. To conclude, the research reveals two asymmetric policy approaches, both time-wise and content-wise. The analysis therefore concludes with a series of policy recommendations, based on the lessons learned from both regions, towards a common approach to the security of European and Japanese markets, industries, and citizens.
Keywords: cybersecurity, vulnerability, European Union, Japan
Procedia PDF Downloads 159
25126 Evaluation of Australian Open Banking Regulation: Balancing Customer Data Privacy and Innovation
Authors: Suman Podder
Abstract:
As Australian ‘Open Banking’ allows customers to share their financial data with accredited Third-Party Providers (‘TPPs’), it is necessary to evaluate whether the regulators have achieved a balance between protecting customer data privacy and promoting data-related innovation. Recognising the need to increase customers’ influence over their own data, and the benefits of data-related innovation, the Australian Government introduced the ‘Consumer Data Right’ (‘CDR’) to the banking sector through the Open Banking regulation. Under Open Banking, TPPs can access customers’ banking data, which allows the TPPs to tailor their products and services to meet customer needs at a more competitive price. This facilitated access and use of customer data will promote innovation by providing opportunities for new products and business models to emerge and grow. However, the success of Open Banking depends on the willingness of customers to share their data, so the regulators have augmented the protection of data by introducing new privacy safeguards to instill confidence and trust in the system. The dilemma in policymaking is that, on the one hand, lenient data privacy laws will help the flow of information, but at the risk of individuals’ loss of privacy; on the other hand, stringent laws that adequately protect privacy may dissuade innovation. Using theoretical and doctrinal methods, this paper examines whether the privacy safeguards under Open Banking will add to the compliance burden of the participating financial institutions, resulting in the undesirable effect of stifling other policy objectives such as innovation. The contribution of this research is threefold. In the emerging field of customer data sharing, this research is one of the few academic studies on the objectives and impact of Open Banking in the Australian context. Additionally, Open Banking is still in the early stages of implementation, so this research traces the evolution of Open Banking through policy debates regarding the desirability of customer data sharing. Finally, the research focuses not only on customers’ data privacy, juxtaposing it with the equally important objective of promoting innovation, but also highlights the critical issues facing the data-sharing regime. This paper argues that while it is challenging to develop a regulatory framework that protects data privacy without impeding innovation and jeopardising yet-unknown opportunities, data privacy and innovation promote different aspects of customer welfare. The paper concludes that if the regulation is appropriately designed and implemented, the benefits of data sharing will outweigh the cost of compliance with the CDR.
Keywords: consumer data right, innovation, open banking, privacy safeguards
Procedia PDF Downloads 144
25125 Generation of Automated Alarms for Plantwide Process Monitoring
Authors: Hyun-Woo Cho
Abstract:
Early detection of incipient abnormal operations is quite necessary for plant-wide process management in order to improve product quality and process safety. Generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and utilized in this area, such as expert systems, mathematical model-based approaches, and multivariate statistical approaches. This work presents a nonlinear empirical monitoring methodology based on the real-time analysis of massive process data. Unfortunately, such big data include measurement noise and unwanted variations unrelated to true process behavior. The elimination of these unnecessary patterns is therefore executed in a data processing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that the detection speed and performance improved significantly, irrespective of the size and location of abnormal events.
Keywords: detection, monitoring, process data, noise
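As a much-simplified stand-in for the methodology, the sketch below smooths a synthetic measurement stream with a moving average (the noise-elimination step) and raises an alarm when the smoothed signal leaves 3-sigma control limits learned from normal operation; it illustrates the alarm-generation idea, not the paper's nonlinear empirical method.

```python
# Minimal sketch (synthetic signal): denoise, then alarm on deviations
# from control limits estimated on in-control data.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(50.0, 1.0, 500)               # in-control training data
stream = np.concatenate([rng.normal(50, 1, 200),
                         rng.normal(55, 1, 50)])  # fault injected at t = 200

W = 10
def moving_average(x, w=W):
    return np.convolve(x, np.ones(w) / w, mode="valid")

mu, sigma = normal.mean(), normal.std()
smoothed = moving_average(stream)
limit = 3 * sigma / np.sqrt(W)          # 3-sigma limit for a mean of W samples
alarms = np.where(np.abs(smoothed - mu) > limit)[0]
print("first alarm at sample:", alarms[0] if alarms.size else None)
```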
Procedia PDF Downloads 255
25124 Meanings and Concepts of Standardization in Systems Medicine
Authors: Imme Petersen, Wiebke Sick, Regine Kollek
Abstract:
In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways, and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges for data handling and processing. Tools based on bioinformatics have been developed to resolve the resulting problems of systematizing, standardizing, and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous datasets of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been paid to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate on standardization in science and technology studies (STS), which instead emphasizes the dynamics, contexts, and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medicine approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain deeper insight into standard operating procedures, not only in systems medicine but also beyond.
Keywords: data, science and technology studies (STS), standardization, systems medicine
Procedia PDF Downloads 345
25123 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System
Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad
Abstract:
The CAN (controller area network) bus is a multi-master, message-broadcast system. The messages sent on the CAN are used to communicate state information, referred to as signals, between different ECUs, which provides data consistency in every node of the system. OBD-II dongles based on a request-response method are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology. The maximum feasible scan rate is only 9 queries per second, which provides 8 data points per second when using the well-known ELM327 OBD-II dongle. This study aims to develop and design a programmable, latency-sensitive vehicle data acquisition system that improves modularity and flexibility in order to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. To do so, the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which during initial research may cause severe damage to the expensive ECUs of the vehicle due to intrinsic vulnerabilities of the CAN bus. The desired sensor data were collected from various vehicles using a Raspberry Pi 3 as the computing and processing unit, with the OBD (request-response) and direct CAN methods used at the same time. Two types of data were collected for this study: first, CAN bus frame data, comprising each line of hex data sent from an ECU; and second, OBD data, representing the limited data that can be requested from an ECU under the standard. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted to any vehicle with minimum effort and minimum time lag in the data extraction process. A standard operating procedure and an experimental vehicle network test bench were developed and can be used for future vehicle network testing experiments.
Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi3
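On a Raspberry Pi with a configured SocketCAN interface, the two collection paths can be combined with the python-can library, as in the sketch below; the channel name and loop are assumptions for a test bench, and the RPM decoding follows the standard SAE J1979 formula for mode 01, PID 0x0C. Treat it as a sketch: sending frames on a real vehicle bus carries risk, as the abstract notes.

```python
# Minimal sketch (Linux SocketCAN + python-can): log raw CAN frames while
# issuing a standard OBD-II request for engine RPM (mode 01, PID 0x0C).
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# OBD-II functional request: 2 payload bytes, mode 0x01, PID 0x0C.
req = can.Message(arbitration_id=0x7DF,
                  data=[0x02, 0x01, 0x0C, 0, 0, 0, 0, 0],
                  is_extended_id=False)
bus.send(req)

for _ in range(100):                       # read a burst of traffic
    msg = bus.recv(timeout=1.0)
    if msg is None:
        break
    if msg.arbitration_id == 0x7E8 and msg.data[2] == 0x0C:
        rpm = (msg.data[3] * 256 + msg.data[4]) / 4.0   # SAE J1979 formula
        print("engine RPM:", rpm)
    else:                                  # raw CAN frame path
        print(f"{msg.timestamp:.3f} {msg.arbitration_id:#05x} {msg.data.hex()}")
```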
Procedia PDF Downloads 212
25122 Big Data in Construction Project Management: The Colombian Northeast Case
Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez
Abstract:
In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made their collection, analysis, traceability, and dissemination essential for project managers. In this sense, there are current trends in emerging technologies, such as machine learning, data analytics, data mining, and Big Data, that facilitate efficient decision-making in projects; the latter is of most interest in this project. This research is part of the thematic line 'construction methods and project management'. Many authors note the relevance that the use of emerging technologies such as Big Data has gained in recent years in project management in the construction sector, the main focus being the optimization of time, scope, and budget, and, in general, the mitigation of risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (Big Data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of Big Data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia.
Keywords: big data, building information modeling, technology, project management
Procedia PDF Downloads 133
25121 Simulation of a Sustainable Irrigation System Development: The Case of Sitio Kantaling Village Farm Lands, Danao City, Cebu, Philippines
Authors: Amando A. Radomes Jr., LLoyd Jun Benjamin T. Embernatre, Cherssy Kaye F. Eviota, Krizia Allyn L. Nunez, Jose Thaddeus B. Roble III
Abstract:
Sitio Kantaling is one of the 34 villages in Danao City, Cebu, in the central Philippines. As of 2015, the eight households in the mountainous village, extending over 40 hectares of land area including 12 hectares of arable land, were the source of over a fifth of the agricultural products that go into the city. Over the years, however, the local government has been concerned with the decline in agricultural productivity, because an increasing number of residents are migrating to the urban areas of the region to look for better employment opportunities. One of the major reasons for the decline in agricultural productivity is underdeveloped irrigation infrastructure. The local government partnered with the University of San Carlos to conduct research on developing an irrigation system that could sustainably meet both agricultural and household consumption needs. From a macro perspective, a dynamic simulation model was developed to understand the long-term behavior of both the status quo and the proposed system. Data on population, water supply and demand, household income, and urban migration were incorporated into the 20-year-horizon model. The study also developed a smart irrigation system design: instead of using electricity to pump water, a network of aqueducts with three main nodes was designed and strategically located to take advantage of gravity to transport water from a spring. Simulation results showed that implementing a sustainable irrigation system would significantly contribute to the socio-economic progress of the local community.
Keywords: agriculture, aqueduct, simulation, sustainable irrigation system
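At its simplest, a dynamic model of this kind reduces to a yearly stock-and-flow balance of spring supply against household and farm demand; the sketch below is a toy version with invented parameters (growth rate, per-capita demand, supply), not the study's calibrated model.

```python
# Minimal sketch (illustrative parameters): 20-year water balance of
# gravity-fed supply vs. household + agricultural demand.
YEARS = 20
population = 8 * 5               # assumed: 8 households x 5 persons each
hh_demand_per_person = 25.0      # m3/year per person, assumption
farm_demand = 12 * 800.0         # m3/year for 12 ha of arable land, assumption
spring_supply = 22_000.0         # m3/year via the aqueduct network, assumption

for year in range(1, YEARS + 1):
    population *= 1.01           # assumed 1% annual growth
    demand = population * hh_demand_per_person + farm_demand
    balance = spring_supply - demand
    if year % 5 == 0:
        status = "surplus" if balance >= 0 else "DEFICIT"
        print(f"year {year:2d}: demand={demand:8.0f} m3, "
              f"{status} {abs(balance):7.0f} m3")
```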
Procedia PDF Downloads 172
25120 Minimum Data of a Speech Signal as Special Indicators of Identification in Phonoscopy
Authors: Nazaket Gazieva
Abstract:
Voice biometric data associated with physiological, psychological, and other factors are widely used in forensic phonoscopy. There are various methods for identifying and verifying a person by voice. This article explores the minimum speech signal data as individual parameters of a speech signal. Monozygotic twins are believed to be genetically identical; using the minimum data of the speech signal, we came to the conclusion that the voice imprint of monozygotic twins is individual. From the experiment, we conclude that the minimum indicators of the speech signal are more stable and reliable for phonoscopic examinations.
Keywords: phonogram, speech signal, temporal characteristics, fundamental frequency, biometric fingerprints
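One of the parameters named in the keywords, the fundamental frequency, can be estimated from a short voiced frame by autocorrelation; the sketch below does this on a synthetic 120 Hz frame, with all parameters illustrative rather than forensic-grade.

```python
# Minimal sketch: estimating the fundamental frequency (F0) of a voiced
# frame by autocorrelation.
import numpy as np

fs = 16_000                            # sample rate (Hz)
t = np.arange(0, 0.04, 1 / fs)         # 40 ms frame
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
lo, hi = int(fs / 400), int(fs / 60)   # search the 60-400 Hz range
lag = lo + np.argmax(ac[lo:hi])
print("estimated F0:", round(fs / lag, 1), "Hz")   # ~120 Hz
```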
Procedia PDF Downloads 147
25119 Mountain Architectural Design Under the Concept of Pedestrian-oriented Cities: Taking Bai Xiang Ju as an Example
Authors: Xiaohan Wang
Abstract:
In the rapid urbanization process, urban design concepts are shifting towards people-oriented approaches. Emphasizing the pedestrian experience promotes urban livability and sustainable development. In mountainous cities, the inconvenience of transportation caused by complex terrain makes pedestrian-friendly architectural design one of the entry points for problem-solving. This paper mainly takes the high-rise residential area of Bai Xiang Ju in Chongqing as an example to explore the architectural design strategies of mountainous cities under the concept of pedestrian-oriented urban design, providing valuable references and insights for similar urban architectural designs.
Keywords: pedestrian city, Bai Xiang Ju, mountain architecture, pedestrian infrastructure, pedestrian-oriented design
Procedia PDF Downloads 13
25118 Performance Evaluation of DSR and OLSR Routing Protocols in MANET Using Varying Pause Time
Authors: Yassine Meraihi, Dalila Acheli, Rabah Meraihi
Abstract:
A MANET (Mobile Ad hoc NETwork) is a collection of wireless mobile nodes that communicate with each other without using any existing infrastructure, access point, or centralized administration. Due to the high mobility and limited radio transmission range of the nodes, routing is an important issue in ad hoc networks, so an appropriate routing protocol is needed in order to quickly ensure a reliable and efficient route between two communicating nodes. In this paper, we present a performance analysis of two mobile ad hoc network routing protocols, DSR and OLSR, using NS-2.34; the performance is determined on the basis of packet delivery ratio, throughput, average jitter, and end-to-end delay with varying pause time.
Keywords: DSR, OLSR, quality of service, routing protocols, MANET
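The four metrics are simple functions of per-packet send and receive events. As a rough illustration, the sketch below computes them from a handful of invented (packet, send time, receive time) records standing in for parsed NS-2 trace lines.

```python
# Minimal sketch: PDR, throughput, average end-to-end delay, and average
# jitter from per-packet records; recv_time is None for dropped packets.
records = [(1, 0.00, 0.04), (2, 0.10, 0.15), (3, 0.20, None),
           (4, 0.30, 0.33), (5, 0.40, 0.47)]      # illustrative events
PKT_BITS = 512 * 8                                # assumed packet size

recv = [(s, r) for _, s, r in records if r is not None]
delays = [r - s for s, r in recv]

pdr = len(recv) / len(records)
duration = max(r for _, r in recv) - min(s for s, _ in recv)
throughput = len(recv) * PKT_BITS / duration      # bits per second
avg_delay = sum(delays) / len(delays)
jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"PDR={pdr:.0%}  throughput={throughput:.0f} bps  "
      f"delay={avg_delay * 1000:.1f} ms  jitter={jitter * 1000:.1f} ms")
```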
Procedia PDF Downloads 556
25117 Covid -19 Pandemic and Impact on Public Spaces of Tourism and Hospitality in Dubai- an Exploratory Study from a Design Perspective
Authors: Manju Bala Jassi
Abstract:
The Covid-19 pandemic has badly mauled Dubai's GDP, which is heavily dependent on the hospitality, tourism, entertainment, logistics, property, and retail sectors. In the context of the World Health Organization protocols on social distancing for better maintenance of health and hygiene, the revival of the battered tourism and hospitality sectors holds serious lessons for designers of interiors and public places. The tangible and intangible aesthetic elements of design (ambiance, materials, furnishings, colors, lighting) and the interior and architectural design issues of tourism and hospitality need a rethink to ensure a memorable tourist experience. Designers ought to experiment with sustainable places of tourism: to design, develop, and build projects that are aesthetic and leave as few negative impacts on the environment and the public as possible. In short, they ought to conceive public spaces that use little untouched material and energy and create minimal pollution and waste. Such spaces can employ healthier and more resource-efficient models of construction, renovation, operation, maintenance, and demolition, thereby mitigating the environmental impacts of construction activities and making them sustainable. These measures encompass the hospitality sector, including hotels and restaurants, which has taken the hardest fall from the pandemic. The paper sought to examine the building energy efficiency, materials, and design employed in public places and green buildings to achieve constructive sustainability, and to establish the benefits of utilizing energy efficiency, green materials, and sustainable design; to document diverse policy interventions and the design and spatial dimensions of the tourism and hospitality sectors; to examine changes in the hospitality and aviation sectors, especially from a design perspective, regarding infrastructure or operational constraints and additional risk-mitigation measures; and to dilate on the implications for interior designers and architects who design public places to facilitate sustainable tourism and hospitality while balancing convenient space and their operations' natural surroundings. A qualitative research approach was adopted for the study. The researcher collected and analyzed data in continuous iteration. Secondary data were collected from articles in journals, trade publications, government reports, newspaper and magazine articles, policy documents, etc. In-depth interviews were conducted with diverse stakeholders. Preliminary data indicate that designers have started reimagining public places of tourism and hospitality against the backdrop of the government push and the WHO guidelines. For instance, with regard to health, safety, hygiene, and sanitation, Emirates, the Dubai-based airline, has augmented health measures at Dubai International Airport and on board its aircraft; it has leveraged high-tech and nanotech solutions, social distancing to encourage the least human contact, and flexible design layouts to limit occupancy. The researcher organized the data into thematic categories and found that the Government of Dubai has initiated comprehensive measures in the hospitality, tourism, and aviation sectors in compliance with the WHO guidelines.
Keywords: Covid 19, design, Dubai, hospitality, public spaces, tourism
Procedia PDF Downloads 169
25116 Evaluating Aquaculture Farmers Responses to Climate Change and Sustainable Practices in Kenya
Authors: Olalekan Adekola, Margaret Gatonye, Paul Orina
Abstract:
The growing demand for farmed fish in underdeveloped and developing countries, as a means of contributing positively towards the eradication of hunger, food insecurity, and malnutrition for their fast-growing populations, has implications for the environment. Likewise, climate change poses both an immediate and a future threat to local fish production, with capture fisheries already experiencing a global decline. This raises fundamental questions not only about how aquaculture practices affect the environment but also about how ready aquaculture farmers are to adapt to climate-related hazards. This paper assesses existing aquaculture practices and approaches to adapting to climate hazards in Kenya, where aquaculture has grown rapidly since 2009. The growth has seen a rise in aquaculture setups mainly along rivers and streams, the importation of seed and feed, and intensification, with possible environmental implications. The aquaculture value chain is further investigated in the context of climate change and its implications for practice, and the strategies necessary for an improved implementation of a resilient aquaculture system in Kenya are examined. Data for the study were collected from interviews, questionnaires, two workshops, and document analysis. Despite the acclaimed nutritional benefit of fish consumption in Kenya, poor management of effluents enriched with nitrogen, phosphorus, organic matter, and suspended solids not only has implications for ecosystem goods and services but is also a potential source of resource-use conflicts, especially with downstream communities and operators in the livestock, horticulture, and industrial sectors. The study concluded that the key sustainable strategies against climate hazards are a future orientation, climate-resilient infrastructure, appropriate site selection, and investment in biosafety.
Keywords: aquaculture, resilience, environment, strategies, Kenya
Procedia PDF Downloads 169
25115 A Non-parametric Clustering Approach for Multivariate Geostatistical Data
Authors: Francky Fouedjio
Abstract:
Multivariate geostatistical data have become omnipresent in the geosciences and pose substantial analysis challenges. One of them is the grouping of data locations into spatially contiguous clusters so that data locations within the same cluster are more similar, while clusters differ from each other, in some sense. Spatially contiguous clusters can significantly improve interpretation by turning the resulting clusters into meaningful geographical subregions. In this paper, we develop an agglomerative hierarchical clustering approach that takes into account the spatial dependency between observations. It relies on a dissimilarity matrix built from a non-parametric kernel estimator of the spatial dependence structure of the data. It integrates existing methods to find the optimal cluster number and to evaluate the contribution of variables to the clustering. The capability of the proposed approach to provide spatially compact, connected, and meaningful clusters is assessed using a bivariate synthetic dataset and a multivariate geochemical dataset. The proposed clustering method gives satisfactory results compared to other similar geostatistical clustering methods.
Keywords: clustering, geostatistics, multivariate data, non-parametric
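One concrete way to fold spatial dependency into agglomerative clustering is to shrink the attribute dissimilarity between nearby locations with a spatial kernel, as in the sketch below; the Gaussian kernel and its weighting are an illustrative reading of the idea, not the paper's exact non-parametric estimator.

```python
# Minimal sketch (synthetic bivariate data): hierarchical clustering with a
# spatially weighted dissimilarity, so nearby locations tend to co-cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, (60, 2))       # data locations
values = rng.normal(0, 1, (60, 2))          # bivariate attributes
values[coords[:, 0] > 50] += 3.0            # spatially structured signal

d_attr = pdist(values)                      # attribute dissimilarity
d_geo = pdist(coords)                       # geographic distance
bandwidth = 30.0
kernel = np.exp(-(d_geo / bandwidth) ** 2)  # Gaussian spatial kernel
dissim = d_attr * (1.0 - 0.5 * kernel)      # shrink dissimilarity for neighbors

labels = fcluster(linkage(dissim, method="average"), t=2, criterion="maxclust")
print(np.bincount(labels)[1:])              # cluster sizes
```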
Procedia PDF Downloads 481
25114 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Since mobile network operators are considered a source of big data, traditional techniques are not effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, effectively handling different big datasets becomes a vital task for operators, with the continuous growth of data and the move from Long Term Evolution (LTE) to 5G. So, there is an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed in a data-driven application for mobile network operators. The main framework includes, for each model, parameter identification, estimation, prediction, and the final data-driven application of the prediction to business and network performance applications. These models are applied to the Telecom Italia Big Data Challenge call detail record (CDR) datasets. The performance of these models, evaluated using specific well-known criteria, shows that ARIMA (the machine-learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
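As a hedged illustration of the ARIMA branch of the framework, the sketch below fits a seasonal ARIMA to a synthetic hourly traffic series with a daily cycle and forecasts the next day; the series, orders, and horizon are assumptions, not the Telecom Italia data or the paper's tuned models.

```python
# Minimal sketch (synthetic series): seasonal ARIMA forecast of hourly
# network traffic with a 24-hour period.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
hours = np.arange(14 * 24)                             # two weeks, hourly
traffic = (100 + 30 * np.sin(2 * np.pi * hours / 24)   # daily cycle
           + rng.normal(0, 5, hours.size))             # noise

model = ARIMA(traffic, order=(2, 0, 1), seasonal_order=(1, 0, 1, 24))
fit = model.fit()
forecast = fit.forecast(steps=24)                      # next day
print("next-day peak estimate:", round(float(forecast.max()), 1))
```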
Procedia PDF Downloads 143
25113 A Data Mining Approach for Analysing and Predicting the Bank's Asset Liability Management Based on Basel III Norms
Authors: Nidhin Dani Abraham, T. K. Sri Shilpa
Abstract:
Asset liability management is an important aspect of the banking business. Moreover, today's banking is based on BASEL III, which strictly regulates counterparty default. This paper focuses on the prediction and analysis of counterparty default risk, the type of risk that occurs when customers fail to repay the amount owed to the lender (a bank or other financial institution). The paper proposes an approach to reducing the counterparty risk occurring in financial institutions using an appropriate data mining technique, and thus predicts the occurrence of non-performing assets (NPAs). It also helps in asset building and improving restructuring quality. Liability management is very important for carrying out the banking business; to know and analyze the depth of a bank's liabilities, a suitable technique is required. For that, a data mining technique is used to predict the dormant behaviour of various deposit customers. Various models are implemented, and the results for savings deposit customers are analyzed. All these data are cleaned using a data cleansing approach on the bank data warehouse.
Keywords: data mining, asset liability management, BASEL III, banking
Procedia PDF Downloads 562
25112 Parallel Coordinates on a Spiral Surface for Visualizing High-Dimensional Data
Authors: Chris Suma, Yingcai Xiao
Abstract:
This paper presents Parallel Coordinates on a Spiral Surface (PCoSS), a parallel-coordinates-based interactive visualization method for high-dimensional data, and a test implementation of the method. Plots generated by the test system are compared with those generated by XDAT, a software package implementing traditional parallel coordinates. Traditional parallel coordinate plots can be cluttered when the number of data points is large or when the dimensionality of the data is high. PCoSS plots display multivariate data on a 3D spiral surface and allow users to see the whole picture of high-dimensional data with less clutter. Taking advantage of the 3D display environment in PCoSS, users can further reduce clutter by zooming into an axis of interest for a closer view, or by moving vantage points and reorienting the viewing angle to obtain a desired view of the plots.
Keywords: human computer interaction, parallel coordinates, spiral surface, visualization
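The geometric core of the idea can be sketched without any rendering: place the D axes along a spiral in the plane and map each normalized data value to a height on its axis, so every record becomes a 3D polyline. The spiral parameters below are an illustrative reading, not the paper's exact construction.

```python
# Minimal sketch: embedding parallel-coordinate axes on a spiral and turning
# each data record into a 3D polyline.
import numpy as np

def spiral_axes(n_dims, turns=1.5, radius0=1.0, growth=0.15):
    theta = np.linspace(0, 2 * np.pi * turns, n_dims)
    r = radius0 + growth * theta
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])  # (D, 2)

def embed(data):
    """data: (N, D) array scaled to [0, 1]; returns (N, D, 3) polyline points."""
    n, d = data.shape
    xy = spiral_axes(d)
    pts = np.empty((n, d, 3))
    pts[:, :, 0] = xy[:, 0]    # axis x position on the spiral
    pts[:, :, 1] = xy[:, 1]    # axis y position on the spiral
    pts[:, :, 2] = data        # the value becomes height on the axis
    return pts

data = np.random.default_rng(5).random((100, 8))   # 100 records, 8 dimensions
print(embed(data).shape)                            # (100, 8, 3)
```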
Procedia PDF Downloads 22
25111 A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters
Authors: Wanyi Zhu, Xia Ming, Huafeng Wang, Junda Chen, Lu Liu, Jiangwei Jiang, Guohua Liu
Abstract:
Anomaly detection is a first and imperative step in responding to unexpected problems and assuring high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, and applies it to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm with production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies the unexpected performance variances of any running application, with an acceptable false positive rate. The proposed approach has already been deployed in real-time production environments to enhance efficiency and stability in daily data center operations.
Keywords: Alibaba data centers, anomaly detection, big data computation, dynamic ensemble learning
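A toy version of the ensemble idea is easy to state: run several weak detectors over the same metric stream and flag a point only when a majority agree, as sketched below. The detector choices, windows, and thresholds are assumptions for illustration, not Alibaba's production algorithms.

```python
# Minimal sketch (synthetic metric stream): majority-vote ensemble of a
# static threshold, a rolling z-score detector, and a rate-of-change detector.
import numpy as np

rng = np.random.default_rng(6)
series = np.concatenate([rng.normal(200, 10, 300),   # normal operation
                         rng.normal(320, 10, 20)])   # anomaly at t = 300

def static_threshold(x, limit=260.0):
    return x > limit

def zscore(x, window=50, k=3.0):
    out = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        w = x[i - window:i]
        out[i] = abs(x[i] - w.mean()) > k * w.std()
    return out

def rate_of_change(x, k=40.0):
    return np.concatenate([[False], np.abs(np.diff(x)) > k])

votes = (static_threshold(series).astype(int)
         + zscore(series).astype(int)
         + rate_of_change(series).astype(int))
anomalies = np.where(votes >= 2)[0]                  # majority vote
print("first anomaly at t =", anomalies[0] if anomalies.size else None)
```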
Procedia PDF Downloads 205
25110 Unsupervised Text Mining Approach to Early Warning System
Authors: Ichihan Tai, Bill Olson, Paul Blessner
Abstract:
Traditional early warning systems that alarm against crises are generally based on structured or numerical data; therefore, a system that can make predictions based on unstructured textual data, an uncorrelated data source, is a great complement to traditional early warning systems. The Chicago Board Options Exchange (CBOE) Volatility Index (VIX), commonly referred to as the fear index, measures the cost of insurance against a market crash and spikes in the event of a crisis. In this study, news data are consumed to predict whether there will be a market-wide crisis, by predicting the movement of the fear index, and historical references to similar events are presented in an unsupervised manner. Topic-modeling-based prediction and representation are performed using daily news data between 1990 and 2015 from The Wall Street Journal against VIX index data from the CBOE.
Keywords: early warning system, knowledge management, market prediction, topic modeling
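As a toy illustration of the pipeline, the sketch below extracts topic mixtures from a tiny invented corpus with LDA and feeds them to a classifier predicting whether the fear index rises; the corpus, labels, and two-topic model are assumptions, not the WSJ/VIX data or the study's models.

```python
# Minimal sketch (toy corpus): LDA topic mixtures as features for predicting
# next-day VIX movement.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "bank default credit crisis contagion fear",
    "earnings growth profits rally optimism",
    "bailout liquidity panic selloff volatility",
    "dividend buyback stable growth calm markets",
] * 25
vix_up = [1, 0, 1, 0] * 25          # 1 = fear index rose next day (toy labels)

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)       # per-document topic mixtures
clf = LogisticRegression().fit(topics, vix_up)
print("train accuracy:", clf.score(topics, vix_up))
```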
Procedia PDF Downloads 343