Search results for: multivariate failure-time data
24361 A Named Data Networking Stack for Contiki-NG-OS
Authors: Sedat Bilgili, Alper K. Demir
Abstract:
The Internet has become dominant, with continuing growth in home, medical, health, smart-city, and industrial automation applications. The Internet of Things (IoT) is an emerging technology that enables such applications in our lives. Moreover, Named Data Networking (NDN) is emerging as a Future Internet architecture that fits the communication needs of IoT networks. The aim of this study is to provide an NDN protocol stack implementation running on the Contiki operating system (OS), an OS developed for constrained IoT devices. In this study, an NDN protocol stack that can work on top of IEEE 802.15.4 link and physical layers has been developed and presented.
Keywords: internet of things (IoT), named-data, named data networking (NDN), operating system
Procedia PDF Downloads 171
24360 Corporate Governance and Audit Report Lag: The Case of Tunisian Listed Companies
Authors: Lajmi Azhaar, Yab Mdallelah
Abstract:
This study examines the Tunisian market, in which recent events, notably financial scandals, provide an appropriate framework for studying the impact of corporate governance on audit report lag; very little research has examined this relationship in this context. The objective of this work is therefore to understand the factors influencing audit report lag, drawing primarily on agency theory (Jensen and Meckling, 1976), which suggests that the characteristics of the board of directors (independence, diligence, and size) and of the audit committee (size, independence, diligence, and expertise) have an impact on the report lag. Our research thus provides empirical evidence on the impact of governance mechanism attributes on audit report lag. Using a sample of forty-seven (47) Tunisian companies listed on the Tunis Stock Exchange (BVMT) during the period from 2014 to 2019, and based on the GMM method for dynamic panels, multivariate analysis shows that most corporate governance attributes have a significant effect on audit report lag. Specifically, audit committee diligence and audit committee expertise have a significant positive effect on audit report lag, while board diligence has a significant negative effect. However, this study finds no evidence that audit committee independence or the size, independence, and diligence of the board of directors are associated with audit report lag. The results also show a significant effect of some control variables. Finally, we contribute to the literature by using the GMM method for dynamic panels and by studying an emerging context that has been little explored by previous studies.
Keywords: governance mechanisms, audit committee, board of directors, audit report lag
Procedia PDF Downloads 174
24359 The Impact of the Board of Directors’ Characteristics on Tax Aggressiveness in USA Companies
Authors: Jihen Ayadi Sellami
Abstract:
The rapid evolution of the global financial landscape has drawn increased attention to corporate tax policies and to the factors that influence firms' tax behavior. To mitigate any residual loss to shareholders resulting from tax aggressiveness and to resolve the agency problem, appropriate systems that separate the management function from the controlling function are needed. It is in this context of growing concern to limit aggressive corporate taxation practices through governance that this study is set. Its aim is to examine the influence of six key characteristics of the board of directors (board size, diligence, CEO duality, presence of audit committees, gender diversity, and independence of directors), taken as governance mechanisms, on the tax decisions of non-financial corporations in the United States. Using a sample of 90 non-financial US firms from the S&P 500 over the 4-year period from 2014 to 2017, results based on a multivariate linear regression highlight significant associations between these characteristics and corporate tax policy. Notably, a larger board, gender diversity, diligence, and increased director independence appear to play an important role in reducing aggressive taxation, while duality has a positive and significant correlation with tax aggressiveness, which may be explained by the manager exploiting his dual position within the company. These findings contribute to a deeper understanding of how board characteristics can influence corporate tax management, providing avenues for more effective corporate governance and more responsible tax decision-making.
Keywords: tax aggressiveness, board of directors, board size, CEO duality, audit committees, gender diversity, director independence, diligence, corporate governance, United States
Procedia PDF Downloads 61
24358 Healthcare Associated Infections in an Intensive Care Unit in Tunisia: Incidence and Risk Factors
Authors: Nabiha Bouafia, Asma Ben Cheikh, Asma Ammar, Olfa Ezzi, Mohamed Mahjoub, Khaoula Meddeb, Imed Chouchene, Hamadi Boussarsar, Mansour Njah
Abstract:
Background: Hospital-acquired infections (HAI) cause significant morbidity, mortality, length of stay, and hospital costs, especially in the intensive care unit (ICU), because of the debilitated immune systems of its patients and their exposure to invasive devices. The aims of this study were to determine the rate and the risk factors of HAI in an ICU of a university hospital in Tunisia. Materials/Methods: A prospective study was conducted in the 8-bed adult medical ICU of a University Hospital (Sousse, Tunisia) over 14 months, from September 15th, 2015 to November 15th, 2016. Patients admitted for more than 48 h were included, and their surveillance was stopped after discharge from the ICU or death. HAIs were defined according to standard Centers for Disease Control and Prevention criteria. Risk factors were analyzed by conditional stepwise logistic regression, with p < 0.05 considered significant. Results: During the study, 192 patients were admitted for more than 48 hours. Their mean age was 59.3 ± 18.20 years and 57.1% were male. Acute respiratory failure was the main reason for admission (72%). The mean SAPS II score at admission was 32.5 ± 14 (range: 6-78). Exposure to mechanical ventilation (MV) and to a central venous catheter (CVC) was observed in 169 (88%) and 144 (75%) patients, respectively. Seventy-three patients (38.02%) developed 94 HAIs, for an incidence density of 41.53 per 1000 patient-days. The mortality rate in patients with HAIs was 65.8% (n = 48). Regarding the type of infection, ventilator-associated pneumonia (VAP) and central venous catheter-associated infections (CVC-AI) were the most frequent, with incidence densities of 14.88/1000 MV-days for VAP and 20.02/1000 CVC-days for CVC-AI. There were 5 peripheral venous catheter-associated infections, 2 urinary tract infections, and 21 other HAIs.
Gram-negative bacteria were the most common organisms identified in HAIs: multidrug-resistant Acinetobacter baumannii (45%) and Klebsiella pneumoniae (10.96%) were the most frequently isolated. Univariate analysis showed that transfer from another hospital department (p = 0.001), intubation (p < 10^-4), tracheostomy (p < 10^-4), age (p = 0.028), grade of acute respiratory failure (p = 0.01), duration of sedation (p < 10^-4), number of CVCs (p < 10^-4), length of mechanical ventilation (p < 10^-4), and length of stay (p < 10^-4) were associated with a high risk of HAIs in the ICU. Multivariate analysis revealed the following independent risk factors for HAIs: transfer from another hospital department (OR = 13.44, 95% CI [3.9, 44.2], p < 10^-4), duration of sedation (OR = 1.18, 95% CI [1.049, 1.325], p = 0.006), high number of CVCs (OR = 2.78, 95% CI [1.73, 4.487], p < 10^-4), and length of stay in the ICU (OR = 1.14, 95% CI [1.066, 1.22], p < 10^-4). Conclusion: Prevention of nosocomial infections in ICUs is a priority of health care systems all around the world, yet their control requires an understanding of the epidemiological data collected in these units.
Keywords: healthcare associated infections, incidence, intensive care unit, risk factors
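The reported odds ratios and confidence intervals follow directly from the fitted logistic-regression coefficients. As a hedged illustration (the coefficient and standard error below are back-calculated for the sketch, not taken from the study's output), an OR with its 95% CI can be derived as:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with an approximate 95% confidence interval."""
    point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return point, lower, upper

# Illustrative inputs only: a coefficient of about 0.166 with standard error
# 0.0595 yields an OR close to the reported 1.18 for sedation duration.
or_, lo, hi = odds_ratio_ci(0.166, 0.0595)
print(round(or_, 2), round(lo, 3), round(hi, 3))
```

The exponentiation is what turns an additive effect on the log-odds scale into the multiplicative per-unit risk reported in the abstract.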
Procedia PDF Downloads 369
24357 Location Privacy Preservation of Vehicle Data in Internet of Vehicles
Authors: Ying Ying Liu, Austin Cooke, Parimala Thulasiraman
Abstract:
The Internet of Things (IoT) has sparked recent research interest in the Internet of Vehicles (IoV). In this paper, we focus on one research area in IoV: preserving the location privacy of vehicle data. We discuss existing location privacy preserving techniques and provide a scheme for evaluating these techniques under IoV traffic conditions. We propose a different strategy for applying differential privacy using a k-d tree data structure to preserve location privacy, and experiment on the real-world Gowalla data set. We show that our strategy produces differentially private data with good preservation of utility, achieving regression accuracy similar to the original dataset on an LSTM (Long Short-Term Memory) neural network traffic predictor.
Keywords: differential privacy, internet of things, internet of vehicles, location privacy, privacy preservation scheme
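As a rough sketch of the kind of perturbation involved, the Laplace mechanism commonly used in differential privacy can be applied to a coordinate pair. The k-d tree partitioning the paper proposes is omitted here, and all parameter values (epsilon, sensitivity, the sample coordinates) are illustrative:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-transform sampling."""
    u = rng.random() - 0.5
    while u <= -0.5:  # guard the log's domain (vanishingly unlikely case)
        u = rng.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize(lat, lon, epsilon, sensitivity, rng):
    """Add epsilon-DP Laplace noise to one location; scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return lat + laplace_noise(scale, rng), lon + laplace_noise(scale, rng)

# Illustrative Gowalla-style check-in (lat/lon near Winnipeg); smaller epsilon
# means more noise and stronger privacy.
noisy = privatize(49.89, -97.14, epsilon=0.5, sensitivity=0.01, rng=random.Random(7))
print(noisy)
```

The utility-vs-privacy trade-off the abstract evaluates comes entirely from the noise scale: the published coordinates stay close enough to the truth for downstream traffic prediction while individual check-ins are masked.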
Procedia PDF Downloads 179
24356 World Agricultural Commodities Prices Dynamics and Volatilities Impacts on Commodities Importation and Food Security in West African Economic and Monetary Union Countries
Authors: Baoubadi Atozou, Koffi Akakpo
Abstract:
Since the decade of the 2000s, the use of foodstuffs such as corn, wheat, and soybeans in biofuel production has grown sharply in the United States, Canada, and Europe, and prices for these agricultural products have risen in the world market. These cereals are the most important source of caloric energy for the populations of the West African Economic and Monetary Union (WAEMU) member countries, which are highly dependent on imports of most of these products. Rising prices can therefore have an important impact on import levels and, consequently, on food security in these countries. This study aims to analyze the interrelationships between the prices of these commodities and their volatilities, and their effects on imports of these agricultural products by each WAEMU member country. The Autoregressive Distributed Lag (ARDL) model, the multivariate GARCH model, and the Granger causality test are used in this investigation. The results show that import levels are highly and significantly sensitive to price changes as well as to their volatility. In the short term as well as in the long term, there is a significant relationship between the prices of these products, and the relationship among their volatilities is generally positive. These volatilities have negative effects on the level of imports. The market characteristics affect food security in these countries, especially access to food for vulnerable and low-income populations. Policy makers must adopt viable strategies to increase agricultural production and limit their dependence on imports.
Keywords: price volatility, import of agricultural products, food security, WAEMU
Procedia PDF Downloads 191
24355 Contribution of Foraminifers in Biostratigraphy and Paleoecology Interpretations of the Basal Eocene from the Phosphatic Sra Ouertaine Basin, in the Southern Tethys (Tunisia)
Authors: Oum Elkhir Mahmoudi, Nebiha Ben Haj Ali
Abstract:
Micropaleontological, sedimentological, and statistical studies were carried out on the late Paleocene-early Eocene succession of Sra Ouertaine and Dyr El Kef in the northern open phosphatic basin of Tunisia. Based on the abundance and stratigraphic distribution of planktic foraminiferal species, five planktic zones have been recognized from the base to the top of the phosphatic layers: the E1 Acarinina sibaiyaensis Zone, the E2 Pseudohastigerina wilcoxensis Zone, the E3 Morozovella marginodentata Zone, the E4 Morozovella formosa Zone, and the E5 Morozovella subbotinae Zone. The Paleocene-Eocene boundary (PETM) is placed just below the base of the phosphatic interval, and the ETM-2 event may be detectable in the analyzed biotic record of Sra Ouertaine. Based on benthic assemblages, abundances, cluster analyses, and multivariate statistical analyses, two biofacies were recognized for each section. The recognized ecozones are typical of a warm, shallow-water inner neritic setting (dominance of the epifaunal genera Anomalinoides, Dentalina, and Cibicidoides, associated with Frondicularia phosphatica, Trochamminoides globigeriniformis, and Eponides elevatus). The paleoenvironment is eutrophic (presence of several bolivinitids and verneuilinids). For the Dyr El Kef section and the P5 and E2 intervals of the Sra Ouertaine section, our records indicate a paleoenvironment influenced by coastal upwelling without oxygen deficiency, with a paleodepth estimated to be around 50 m; the paleoecosystem is diversified and balanced, with a general tendency toward stressed conditions. The upper part of the Sra Ouertaine section is more eutrophic and influenced by coastal upwelling with oxygen deficiency; the paleodepth is estimated to be less than 50 m and the ecosystem is unsettled.
Keywords: Tunisia, Sra Ouertaine, Dyr El Kef, early Eocene, foraminifera, chronostratigraphy, paleoecology, paleoenvironment
Procedia PDF Downloads 47
24354 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price
Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin
Abstract:
Data mining is a fundamental technique for identifying patterns in large data sets, and the extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to mining, data are consolidated so that the resulting process may be more efficient. This study investigates the effect of different data normalization techniques, namely min-max, z-score, and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed are the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot prices of crude oil and gasoline. Results showed that GWO works better with the z-score normalization technique, while ABC produces better accuracy with min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf Optimizer
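The three normalization techniques compared can be sketched as follows; the price list is made-up illustration data, independent of the GWO/ABC models themselves:

```python
import math

def min_max(xs, new_min=0.0, new_max=1.0):
    """Rescale values linearly into [new_min, new_max]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) * (new_max - new_min) + new_min for x in xs]

def z_score(xs):
    """Center on the mean and scale by the (population) standard deviation."""
    mean = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return [(x - mean) / sd for x in xs]

def decimal_scaling(xs):
    """Divide by the smallest power of ten that brings every |value| below 1."""
    j = 0
    while max(abs(x) for x in xs) / (10 ** j) >= 1:
        j += 1
    return [x / 10 ** j for x in xs]

prices = [61.2, 74.9, 88.4, 102.7, 95.3]   # invented daily spot prices
print(min_max(prices))         # smallest price maps to 0.0, largest to 1.0
print(decimal_scaling(prices)) # here j = 3, so every price is divided by 1000
```

Which rescaling suits which optimizer is exactly the empirical question the study answers; the transforms themselves are standard.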
Procedia PDF Downloads 476
24353 Collision Theory Based Sentiment Detection Using Discourse Analysis in Hadoop
Authors: Anuta Mukherjee, Saswati Mukherjee
Abstract:
Data is growing every day. Social networking sites such as Twitter are becoming an integral part of our daily lives and contribute a large share of the growth in data. They are a rich source for sentiment detection and mining, since people often express honest opinions through tweets. However, although sentiment analysis is a well-researched topic for text, analysis of Twitter data poses additional challenges, since tweets are unstructured, full of abbreviations, and without strict grammatical correctness. We have employed collision theory to achieve sentiment analysis of Twitter data and have incorporated discourse analysis into the collision theory based model to detect sentiment from tweets accurately. We have also used the retweet field to assign weights to certain tweets and obtained the overall weightage of a topic provided in the form of a query. Hadoop has been exploited for speed. Our experiments show effective results.
Keywords: sentiment analysis, twitter, collision theory, discourse analysis
Procedia PDF Downloads 535
24352 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics
Authors: Zahid Ullah, Atlas Khan
Abstract:
The rapid advancement of data collection, storage, and processing capabilities has led to an explosion of data in various domains. In this era of big data, the mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in the mathematical sciences and their application in harnessing the power of data analytics. It highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and related fields to develop cutting-edge methodologies, and explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of the mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, the social sciences, and beyond, showing how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. It also underlines the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals, recognizing the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. Finally, it stresses the significance of ongoing research in the mathematical sciences and the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation.
In summary, this abstract sheds light on advances in the mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.
Keywords: mathematical sciences, data analytics, advances, unveiling
Procedia PDF Downloads 93
24351 A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics
Authors: Douglas A. Menezes, Isabel D. Nunes, Ulrich Schiel
Abstract:
Most virtual learning environments do not provide support mechanisms for the integrated planning, construction, and follow-up of an Instructional Design supported by learning analytics results. The present work presents an authoring tool responsible for constructing the structure of an Instructional Design (ID) in such a way that the data are not altered during the execution of the course. The visual interface presents the critical situations in this ID, serving as a support tool for course follow-up and for possible improvements, which can be made during its execution or in the planning of a new edition of the course. The ID model is based on high-level Petri nets, and the visualization forms are determined by the specific kind of data generated by an e-course: a population of students generating sequentially dependent data.
Keywords: educational data visualization, high-level petri nets, instructional design, learning analytics
Procedia PDF Downloads 243
24350 Analysis of Users’ Behavior on Book Loan Log Based on Association Rule Mining
Authors: Kanyarat Bussaban, Kunyanuth Kularbphettong
Abstract:
This research aims to create a model for analyzing student behavior in using library resources, based on data mining techniques, in the case of Suan Sunandha Rajabhat University. The model was created with association rules using the apriori algorithm. The analysis yielded 14 rules, which were tested with a testing data set; the classification accuracy was 79.24 percent and the MSE was 22.91. The results showed that the user behavior model built with the association rule technique can be used to manage library resources.
Keywords: behavior, data mining technique, apriori algorithm, knowledge discovery
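A minimal sketch of apriori-style frequent-itemset mining and of the rule confidence such an analysis reports; the transactions below are invented for illustration and are not the library's actual loan log:

```python
def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) mapped to their support."""
    n = len(transactions)
    tx = [set(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in tx if itemset <= t) / n

    frequent = {}
    # Level 1: frequent single items.
    level = {frozenset([i]) for t in tx for i in t}
    level = {s for s in level if support(s) >= min_support}
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        # Join frequent k-itemsets into candidate (k+1)-itemsets, then prune.
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        level = {c for c in candidates if support(c) >= min_support}
        k += 1
    return frequent

# Each transaction is one borrower's set of book topics (illustrative data).
loans = [
    ["data_mining", "statistics"],
    ["data_mining", "statistics", "algorithms"],
    ["statistics", "algorithms"],
    ["data_mining", "statistics"],
]
freq = apriori(loans, min_support=0.5)
pair = frozenset(["data_mining", "statistics"])
# Confidence of the rule data_mining -> statistics:
conf = freq[pair] / freq[frozenset(["data_mining"])]
print(conf)  # 1.0: every data-mining borrower here also borrowed statistics
```

Rules mined this way (antecedent, consequent, support, confidence) are the raw material a library can rank to decide shelving and acquisition.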
Procedia PDF Downloads 404
24349 Exploration of RFID in Healthcare: A Data Mining Approach
Authors: Shilpa Balan
Abstract:
Radio Frequency Identification, popularly known as RFID, is used to automatically identify and track tags attached to items. This study focuses on the application of RFID in healthcare, where its adoption is a crucial technology for patient safety and inventory management: data from RFID tags are used to identify the locations of patients and inventory in real time. Medical errors are thought to be a prominent cause of loss of life and injury, and a major advantage of RFID application in the healthcare industry is the reduction of medical errors. The healthcare industry has generated huge amounts of data, and by discovering patterns and trends within the data, big data analytics can help improve patient care and lower healthcare costs. The increasing number of research publications leading to innovations in RFID applications shows the importance of this technology. This study explores the current state of RFID research in healthcare using a text mining approach; no study has yet examined this question using a data mining approach. Related articles on RFID were collected from healthcare journals and news articles published from 2000 to 2015. Significant keywords on the topic of focus were identified and analyzed using open-source data analytics software such as RapidMiner; such analytical tools help extract pertinent information from massive volumes of data. The main benefits of adopting RFID technology in healthcare are found to include tracking medicines and equipment, upholding patient safety, and improving security, and the real-time tracking features of RFID allow for enhanced supply chain management. By using big data productively, healthcare organizations can gain significant benefits: big data analytics in healthcare enables improved decisions by extracting insights from large volumes of data.
Keywords: RFID, data mining, data analysis, healthcare
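The keyword-counting step of such a text mining approach can be sketched as follows; the article snippets and stopword list are illustrative, not the study's corpus, and RapidMiner's own operators are not reproduced:

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "in", "and", "to", "a", "is", "for", "are", "when"}

def top_keywords(docs, k=3):
    """Count non-stopword tokens across a document collection."""
    counts = Counter()
    for doc in docs:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(k)

articles = [
    "RFID tags track medicines and equipment in real time",
    "Patient safety improves when RFID tracks equipment",
    "RFID adoption reduces medical errors",
]
print(top_keywords(articles))  # led by ('rfid', 3) and ('equipment', 2)
```

Real pipelines add stemming and n-gram extraction on top of this frequency core, but ranking terms by corpus frequency is the starting point for spotting research themes.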
Procedia PDF Downloads 233
24348 The Importance of Knowledge Innovation for External Audit on Anti-Corruption
Authors: Adel M. Qatawneh
Abstract:
This paper aimed to determine the importance of knowledge innovation for external audit on anti-corruption in all Jordanian banking companies listed on the Amman Stock Exchange (ASE). The importance of the study arises from the need to recognize knowledge innovation for external audit and anti-corruption amid developments in the world of business. The variables expected to be affected by external audit innovation are: reliability of financial data, relevance of financial data, consistency of financial data, full disclosure of financial data, and protection of the rights of investors. To achieve the objectives of the study, a questionnaire was designed and distributed to the Jordanian banks listed on the Amman Stock Exchange. The data analysis found that banks in Jordan attach positive importance to knowledge innovation for external audit on anti-corruption and agree on its benefits. The statistical analysis showed that knowledge innovation for external audit had a positive impact on anti-corruption, and that external audit has a statistically significant relationship with anti-corruption, reliability of financial data, consistency of financial data, full disclosure of financial data, and protection of the rights of investors.
Keywords: knowledge innovation, external audit, anti-corruption, Amman Stock Exchange
Procedia PDF Downloads 465
24347 Automated End-to-End Pipeline Processing Solution for Autonomous Driving
Authors: Ashish Kumar, Munesh Raghuraj Varma, Nisarg Joshi, Gujjula Vishwa Teja, Srikanth Sambi, Arpit Awasthi
Abstract:
Autonomous driving vehicles are revolutionizing the transportation system of the 21st century. This has been possible due to intensive research into making robust, reliable, and intelligent programs that can perceive and understand their environment and make decisions based on that understanding. It is a very data-intensive task, with data coming from multiple sensors, and the amount of data directly reflects on the performance of the system. Researchers have to design the preprocessing pipeline for different datasets, with different sensor orientations and alignments, before a dataset can be fed to the model. This paper proposes a solution that unifies all the data from different sources into a uniform format using the intrinsic and extrinsic parameters of the sensors used to capture the data, allowing the same pipeline to use data from multiple sources at a time and making the adoption of new or in-house generated datasets easy. The solution also automates the complete deep learning pipeline, from preprocessing to post-processing, for various tasks, allowing researchers to design multiple custom end-to-end pipelines. Thus, the solution takes care of input and output data handling, saving the time and effort spent on it and allowing more time for model improvement.
Keywords: augmentation, autonomous driving, camera, custom end-to-end pipeline, data unification, lidar, post-processing, preprocessing
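The unification step rests on applying each sensor's extrinsic calibration to bring its measurements into a common reference frame. A minimal sketch (the 4x4 matrix below is an invented example, not a calibration from any real dataset):

```python
def transform_points(points, extrinsic):
    """Apply a 4x4 extrinsic matrix (sensor frame -> reference frame) to
    3D points by lifting them to homogeneous coordinates."""
    out = []
    for x, y, z in points:
        p = (x, y, z, 1.0)
        q = [sum(extrinsic[r][c] * p[c] for c in range(4)) for r in range(4)]
        out.append((q[0], q[1], q[2]))
    return out

# Illustrative extrinsic: a lidar mounted 1.2 m forward of the reference
# origin, with no rotation (identity in the upper-left 3x3 block).
extrinsic = [
    [1.0, 0.0, 0.0, 1.2],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
print(transform_points([(0.0, 0.0, 0.0), (3.0, -1.0, 0.5)], extrinsic))
# every point is shifted 1.2 m along x into the shared frame
```

Once every sensor's points live in one frame, a single downstream pipeline can consume mixed camera/lidar datasets regardless of mounting geometry, which is the adaptability the paper's solution targets.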
Procedia PDF Downloads 123
24346 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues
Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid
Abstract:
New approaches for analyzing and visualizing data streams on a real-time basis are important for making prompt decisions. Financial market trading and surveillance, large-scale emergency response, and crowd control are example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing source data, and with the emergence of big data and social media, new techniques and tools are required to process streaming data. Today, a range of tools implementing some of these functionalities is available. In this paper, we present a chronological evaluation of the evolution of technologies supporting real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered the general information, main techniques, challenges, and open issues. The techniques for streaming text visualization are identified, in chronological order, based on the Text Visualization Browser. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges of each identified tool.
Keywords: information visualization, visual analytics, text mining, visual text analytics tools, big data visualization
Procedia PDF Downloads 399
24345 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relations services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, customer expenditure data are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts churn probabilities at 83% accuracy with only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in bill amounts.
Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks
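As a toy stand-in for the ANN (a single logistic neuron rather than the multi-layer network the study uses), churn prediction from expenditure features can be sketched as follows; the feature values and labels are invented for illustration:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Train a single logistic neuron by per-sample gradient descent."""
    rng = random.Random(42)
    w = [rng.uniform(-0.1, 0.1) for _ in X[0]]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            g = p - yi                        # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy feature: normalized drop in monthly bill amount over three months.
# Customers whose spending collapses are labeled churned (1).
X = [[0.9], [0.8], [0.7], [0.2], [0.1], [0.0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [0.85]) > 0.5, predict(w, b, [0.05]) > 0.5)  # True False
```

A real churn model replaces the single neuron with hidden layers and feeds richer billing features, but the training loop (forward pass, loss gradient, weight update) has the same shape.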
Procedia PDF Downloads 146
24344 The Face Sync-Smart Attendance
Authors: Bekkem Chakradhar Reddy, Y. Soni Priya, Mathivanan G., L. K. Joshila Grace, N. Srinivasan, Asha P.
Abstract:
Currently, there are many problems related to marking attendance in schools, offices, and other places, and organizations tasked with collecting daily attendance data have numerous concerns. Among the different ways to mark attendance, the most commonly used method is collecting data manually by calling out each student, which is a slow and error-prone process. Many new technologies now help to mark attendance automatically, reducing work and recording the data. We propose to implement attendance marking using the latest of these technologies: a system based on face identification and analysis. The project is developed by gathering faces and analyzing the data, using deep learning algorithms to recognize faces effectively. The attendance record is then forwarded to the host by email. The project was implemented in Python, using the cv2 (OpenCV), face_recognition, and smtplib libraries.
Keywords: python, deep learning, face recognition, cv2, smtplib, dlib
Procedia PDF Downloads 58
24343 Geographical Data Visualization Using Video Games Technologies
Authors: Nizar Karim Uribe-Orihuela, Fernando Brambila-Paz, Ivette Caldelas, Rodrigo Montufar-Chaveznava
Abstract:
In this paper, we present advances in the implementation of a strategy to visualize geographical data using a software development kit (SDK) for video games. We use multispectral images from the Landsat 7 platform and Laser Imaging Detection and Ranging (LIDAR) data from the Mexican National Institute of Statistics and Geography (INEGI). We select a place of interest from the Landsat data and apply some processing to the image (rotations, atmospheric correction, and enhancement). The resulting image serves as our grayscale color map, to be fused with the LIDAR data, which were selected using the same coordinates as the Landsat scene and translated to 8-bit raw data. Both images are fused in software developed using Unity (an SDK employed for video games), and the resulting scene can then be displayed and explored by moving around. The idea is that the software could be used by students of geology and geophysics at the Engineering School of the National University of Mexico: they would download the software and the images corresponding to a geological place of interest to a smartphone and could virtually visit and explore the site with a virtual reality visor such as Google Cardboard.
Keywords: virtual reality, interactive technologies, geographical data visualization, video games technologies, educational material
Procedia PDF Downloads 246
24342 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
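The class of architectures considered, fully connected feedforward networks with ReLU activations, can be sketched as a plain forward pass; the weights below are arbitrary illustrative values, and the estimation theory itself is not reproduced:

```python
def relu(v):
    """Rectified linear unit applied elementwise."""
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """One fully connected layer: W has shape (out, in), b has shape (out,)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward(x, layers):
    """Feedforward ReLU network; the final layer is linear (regression output)."""
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Illustrative 2-2-1 network: one hidden ReLU layer, one linear output unit.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, -0.2]),
    ([[2.0, 1.0]], [0.1]),
]
print(forward([1.0, 2.0], layers))
```

In the sieve framework, this function class is the "sieve": its width and depth (the sizes of `layers`) grow with the sample size, and the paper's conditions govern how fast they may grow while preserving the convergence rate.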
Procedia PDF Downloads 4124341 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground
Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee
Abstract:
To assess the risk of underground structures and the surrounding ground, we collect basic data through engineering methods of measurement, exploration, and surveys, and derive the risk through appropriate analyses and assessments for urban railroad underground structures and the surrounding ground, including station inflow. Basic data are obtained by fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground-penetrating radar, and a lightweight deflectometer, and are evaluated to determine whether they exceed threshold values. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground. We also develop a risk management system to manage these data efficiently and to provide a convenient interface for data input/output.Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk
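A minimal sketch of the threshold screening step described above, checking sensor readings against limit values. The sensor names, units, and thresholds here are hypothetical examples, not values from the system:

```python
# Illustrative limits for the sensor streams named in the abstract;
# all names and numbers are hypothetical, not from the study.
THRESHOLDS = {
    "fiber_optic_strain_ue": 500.0,   # microstrain
    "mems_tilt_deg": 0.5,             # degrees
    "station_inflow_l_min": 120.0,    # liters per minute
}

def flag_exceedances(readings):
    """Return the sensors whose latest reading exceeds its limit."""
    return {name: value
            for name, value in readings.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]}

latest = {"fiber_optic_strain_ue": 620.0,
          "mems_tilt_deg": 0.2,
          "station_inflow_l_min": 150.0}
alarms = flag_exceedances(latest)
```

A real system would feed these flags into the risk-level analysis rather than treat each sensor in isolation.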
Procedia PDF Downloads 33624340 Integration of Big Data to Predict Transportation for Smart Cities
Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin
Abstract:
An intelligent transportation system is essential to building smarter cities. Machine-learning-based transportation prediction is a highly promising approach because it makes invisible aspects visible. In this context, this research aims to build a prototype model that predicts the transportation network using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay through machine learning applied to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is composed of real-time bus data. The data are gathered through public data portals and real-time Application Program Interfaces (APIs) provided by the government. These data are the fundamental resources for organizing interval pattern models of bus operations as traffic environment factors (road speeds, station conditions, weather, and real-time operating information of buses). The prototype model was designed with a machine learning tool (RapidMiner Studio), and tests were conducted for bus delay prediction. This research presents experiments to increase the prediction accuracy of bus headway by analyzing urban big data. Big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Therefore, based on this analysis method, this research represents an effective use of machine learning and urban big data to understand urban dynamics.Keywords: big data, machine learning, smart city, social cost, transportation network
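The study itself uses RapidMiner Studio; purely as a generic illustration of the modeling step, the sketch below fits a linear delay model on synthetic versions of the feature types the abstract names (road speed, weather, bus status). All feature names, units, and coefficients are hypothetical:

```python
import numpy as np

# Hypothetical features of the kinds the abstract names: road speed
# (km/h), rainfall (mm/h), and passenger load (0-1); values are synthetic.
rng = np.random.default_rng(1)
n = 200
road_speed = rng.uniform(10, 60, n)
rainfall = rng.uniform(0, 20, n)
load = rng.uniform(0, 1, n)
# Synthetic delay: slower roads and heavier rain increase delay.
delay_min = (15 - 0.2 * road_speed + 0.3 * rainfall + 2.0 * load
             + rng.normal(0, 0.5, n))

# Least-squares fit of delay on an intercept plus the three features.
X = np.column_stack([np.ones(n), road_speed, rainfall, load])
coef, *_ = np.linalg.lstsq(X, delay_min, rcond=None)

def predict_delay(speed_kmh, rain_mm_h, pax_load):
    """Predicted delay in minutes for one bus under given conditions."""
    return float(coef @ np.array([1.0, speed_kmh, rain_mm_h, pax_load]))
```

A dynamic headway model would refit or update such a predictor as the real-time API feeds arrive, rather than relying on a fixed timetable interval.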
Procedia PDF Downloads 26024339 Integrated Model for Enhancing Data Security Performance in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing is an important and promising field in the recent decade. Cloud computing allows sharing resources, services, and information among the people of the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed for ensuring a secure communication system, hiding information from other users, and saving the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used; the password is hashed with SHA-2 as a one-way function. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish
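As a minimal sketch of the SHA-2 integrity check the model relies on, the snippet below uses SHA-256 (a SHA-2 family hash) from Python's standard library. The Blowfish encryption half of the model is not shown, as it requires a third-party cryptography library; this covers only the integrity step:

```python
import hashlib

def sha2_digest(data: bytes) -> str:
    """SHA-256 (a SHA-2 family hash) of the payload, hex-encoded."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest on download and compare it with the digest
    stored at upload time; a mismatch means the data was altered."""
    return sha2_digest(data) == expected_digest

payload = b"example file contents"
stored = sha2_digest(payload)  # digest computed at upload time
ok = verify_integrity(payload, stored)
tampered = verify_integrity(payload + b"x", stored)
```

The same one-way property is what makes SHA-2 suitable for storing password verifiers, though a production system would add salting and a slow key-derivation function.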
Procedia PDF Downloads 47724338 Assessing the Impact of Physical Inactivity on Dialysis Adequacy and Functional Health in Peritoneal Dialysis Patients
Authors: Mohammad Ali Tabibi, Farzad Nazemi, Nasrin Salimian
Abstract:
Background: Peritoneal dialysis (PD) is a prevalent renal replacement therapy for patients with end-stage renal disease. Despite its benefits, PD patients often experience reduced physical activity and physical function, which can negatively impact dialysis adequacy and overall health outcomes. Despite the known benefits of maintaining physical activity in chronic disease management, the specific interplay between physical inactivity, physical function, and dialysis adequacy in PD patients remains underexplored. Understanding this relationship is essential for developing targeted interventions to enhance patient care and outcomes in this vulnerable population. This study aims to assess the impact of physical inactivity on dialysis adequacy and functional health in PD patients. Methods: This cross-sectional study included 135 peritoneal dialysis patients from multiple dialysis centers. Physical inactivity was measured using the International Physical Activity Questionnaire (IPAQ), while physical function was assessed using the Short Physical Performance Battery (SPPB). Dialysis adequacy was evaluated using the Kt/V ratio. Additional variables such as demographic data, comorbidities, and laboratory parameters were collected to control for potential confounders. Statistical analyses were performed to determine the relationships between physical inactivity, physical function, and dialysis adequacy. Results: The study cohort comprised 70 males and 65 females with a mean age of 55.4 ± 13.2 years. A significant proportion of the patients (65%) were categorized as physically inactive based on IPAQ scores. Inactive patients demonstrated significantly lower SPPB scores (mean 6.2 ± 2.1) compared to their more active counterparts (mean 8.5 ± 1.8, p < 0.001). Dialysis adequacy, as measured by Kt/V, was found to be suboptimal (Kt/V < 1.7) in 48% of the patients. 
There was a significant positive correlation between physical function scores and Kt/V values (r = 0.45, p < 0.01), indicating that better physical function is associated with higher dialysis adequacy. There was also a significant negative correlation between physical inactivity and physical function (r = -0.55, p < 0.01). Additionally, physically inactive patients had lower Kt/V ratios than their active counterparts (1.3 ± 0.3 vs. 1.8 ± 0.4, p < 0.05). Multivariate regression analysis revealed that physical inactivity was an independent predictor of reduced dialysis adequacy (β = -0.32, p < 0.01) and poorer physical function (β = -0.41, p < 0.01) after adjusting for age, sex, comorbidities, and dialysis vintage. Conclusion: This study underscores the critical role of physical activity and physical function in maintaining adequate dialysis in peritoneal dialysis patients. The findings highlight the need for targeted interventions to promote physical activity in this population to improve their overall health outcomes. Future research should focus on developing and evaluating exercise programs tailored for PD patients, and on exploring the mechanisms underlying these associations, to enhance physical function, dialysis adequacy, and patient care.Keywords: inactivity, physical function, peritoneal dialysis, dialysis adequacy
Procedia PDF Downloads 3524337 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as selecting members for a game and determining game strategy based on the analysis of accumulated sports data, has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques to win games. However, it is difficult to analyze per-play game data, such as ball tracking or the motion of players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a decision that is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are whether the lineup should be changed and whether a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, we can accumulate scoring data for each play, which indicates a player's contribution to the game, and this scoring data can be treated as a time series. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model can identify the current optimal lineup for different situations. In this research, we collected the accumulated NBA data from the 2019-2020 season.
Then we apply the method to actual basketball play data to verify the reliability of the proposed model.Keywords: recurrent neural network, players lineup, basketball data, decision making model
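To make the RNN-plus-NN combination concrete, here is a forward-pass-only NumPy sketch: a simple recurrence summarizes the scoring time series, a feedforward branch encodes the game situation, and a joint head scores a candidate lineup. All dimensions, weights, and feature choices are hypothetical stand-ins, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: per-play scoring features, hidden state,
# and static "situation" features (e.g., score margin, lineup flags).
D_PLAY, D_HID, D_SIT = 4, 8, 5

# RNN branch: summarizes the time series of play-by-play scoring data.
Wx = rng.standard_normal((D_PLAY, D_HID)) * 0.1
Wh = rng.standard_normal((D_HID, D_HID)) * 0.1
# NN branch: encodes the current game situation.
Ws = rng.standard_normal((D_SIT, D_HID)) * 0.1
# Output head: predicted score contribution of a candidate lineup.
Wo = rng.standard_normal(2 * D_HID) * 0.1

def predict_score(plays, situation):
    """plays: (T, D_PLAY) scoring time series; situation: (D_SIT,)."""
    h = np.zeros(D_HID)
    for x_t in plays:                # simple (Elman-style) recurrence
        h = np.tanh(x_t @ Wx + h @ Wh)
    s = np.tanh(situation @ Ws)      # situation embedding
    return float(np.concatenate([h, s]) @ Wo)

# Compare two candidate situations (e.g., current vs. Small Ball
# lineup) under the same play history and keep the higher prediction.
plays = rng.standard_normal((20, D_PLAY))
score_a = predict_score(plays, rng.standard_normal(D_SIT))
score_b = predict_score(plays, rng.standard_normal(D_SIT))
best = "A" if score_a >= score_b else "B"
```

In the actual model the weights would be trained on the accumulated NBA play data rather than drawn at random.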
Procedia PDF Downloads 13324336 Impact of International Student Mobility on European and Global Identity: A Case Study of Switzerland
Authors: Karina Oborune
Abstract:
International student mobility involves a unique spatio-temporal context, and exploring the various aspects of mobile students' experience can lead to new findings within identity studies. Previous studies have mainly focused on student mobility within Europe and its impact on European identity, arguing that students who participate in intra-European mobility already feel European before the exchange. Contrary to previous studies, in this paper student mobility is analyzed from a different point of view. In order to see whether a true Europeanization of identities is taking place, it is necessary to contrast European identity with an alternative supranational identity that could similarly result from student mobility, in particular a global identity. Besides, the paper explores whether the geographical constellation (the host country's continental location during mobility: Europe vs. outside of Europe) plays a role. Based on a newly developed model of multicultural, social, and socio-demographic variables, it is argued that intra-European mobility increases only the global identity of students (H1), whereas mobility to countries outside of Europe causes changes in European identity (H2). The quantitative study (survey, n=1440, 22 higher education institutions, an experimental group of former and future/potential mobile students and a control group of non-mobile students) was held in Switzerland, where equally high numbers of students participate in intra-European mobility and in mobility outside of Europe.
The results of multivariate linear regression showed that students who participate in an exchange in Europe increase their European identity due to having close friends from Europe, as well as the length of the mobility experience, whereas students who participate in an exchange outside of Europe increase their global identity due to having close friends from outside of Europe and proficiency in foreign languages.Keywords: student mobility, European identity, global identity
Procedia PDF Downloads 73124335 Challenges in Multi-Cloud Storage Systems for Mobile Devices
Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta
Abstract:
The demand for cloud storage is increasing because users want continuous access to their data. Cloud storage has revolutionized the way users access their data. Many cloud storage service providers are available, such as Dropbox and Google Drive, offering limited free storage; for extra storage, users have to pay money, which acts as a burden on users. To avoid the issue of limited free storage, the concept of multi-cloud storage was introduced. In this paper, we discuss the limitations of existing multi-cloud storage systems for mobile devices.Keywords: cloud storage, data privacy, data security, multi cloud storage, mobile devices
Procedia PDF Downloads 69924334 Spatial and Geostatistical Analysis of Surficial Soils of the Contiguous United States
Authors: Rachel Hetherington, Chad Deering, Ann Maclean, Snehamoy Chatterjee
Abstract:
The U.S. Geological Survey conducted a soil survey and subsequent mineralogical and geochemical analyses of over 4800 samples taken across the contiguous United States between 2007 and 2013. At each location, samples were taken from the top 5 cm, the A-horizon, and the C-horizon. Many studies have examined the correlation between the mineralogical and geochemical content of soils and influencing factors such as parent lithology, climate, soil type, and age, but little has been done to quantify and assess the correlations between elements in the soil on a national scale. GIS was used for the mapping and multivariate interpolation of over 40 major and trace elements for surficial soils (0-5 cm depth). Qualitative analysis of the spatial distribution across the U.S. shows distinct patterns amongst elements both within the same periodic groups and within different periodic groups, and therefore with different behavioural characteristics. Results show the emergence of four main patterns of high-concentration areas: a vertical band along the west coast, a C-shape through the states around Utah and northern Arizona, a V-shape through the Midwest connecting to the Appalachians, and a band along the Appalachians. The Band Collection Statistics tool in GIS was used to quantitatively analyse the geochemical raster datasets and calculate a correlation matrix. Patterns emerged that were not identified in the qualitative analysis, many of them amongst elements with very different characteristics. Preliminary results show 41 element pairings with a strong positive correlation (r ≥ 0.75). Both qualitative and quantitative analyses on this scale could increase knowledge of the relationships between element distribution and behaviour in surficial soils of the U.S.Keywords: correlation matrix, geochemical analyses, spatial distribution of elements, surficial soils
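The correlation-matrix step (here computed with NumPy rather than the GIS Band Collection Statistics tool) can be sketched as follows, using the study's r ≥ 0.75 cutoff to pick out strong pairs. The element columns and concentration values are synthetic placeholders, not USGS data:

```python
import numpy as np

# Hypothetical element-concentration table: rows are sample sites,
# columns are elements (the survey measured over 40; three shown).
rng = np.random.default_rng(2)
n_sites = 500
fe = rng.lognormal(3.0, 0.4, n_sites)         # iron
ti = fe * 0.05 + rng.normal(0, 0.2, n_sites)  # titanium tracks iron
ca = rng.lognormal(2.0, 0.6, n_sites)         # calcium, independent
data = np.column_stack([fe, ti, ca])
elements = ["Fe", "Ti", "Ca"]

# Pairwise Pearson correlation matrix across the element columns.
corr = np.corrcoef(data, rowvar=False)

# Extract strongly correlated pairs using the study's 0.75 cutoff.
strong = [(elements[i], elements[j], round(float(corr[i, j]), 2))
          for i in range(len(elements))
          for j in range(i + 1, len(elements))
          if corr[i, j] >= 0.75]
```

With 40+ elements the same loop scales directly; only the upper triangle is scanned so each pair is reported once.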
Procedia PDF Downloads 12624333 Talent Management through Integration of Talent Value Chain and Human Capital Analytics Approaches
Authors: Wuttigrai Ngamsirijit
Abstract:
Talent management in today's modern organizations has become data-driven due to the demand for objective human resource decision making and the development of analytics technologies. HR managers have faced several obstacles in exploiting data and information to make effective talent management decisions. These include process-based data and records; insufficient human-capital-related measures and metrics; a lack of capability for strategic data modeling; and the time consumed in aggregating numbers before decisions can be made. This paper proposes a framework for talent management through the integration of talent value chain and human capital analytics approaches. It encompasses key data, measures, and metrics for strategic talent management decisions along the organizational and talent value chain. Moreover, specific predictive and prescriptive models incorporating these data and information are recommended to help managers understand the state of talent, the gaps in managing talent and the organization, and the ways to develop optimized talent strategies.Keywords: decision making, human capital analytics, talent management, talent value chain
Procedia PDF Downloads 18724332 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function that takes values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object's memberships over all clusters equals one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization approaches have played a part in the fuzzy c-means clustering technique; regularization introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.Keywords: clustering, fuzzy c-means, regularization, relative entropy
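As a generic illustration of entropy-regularized c-means (the paper's exact relative-entropy formulation may differ), the sketch below alternates a closed-form softmax membership update with a weighted-mean center update; the initialization and the regularization weight lam are assumptions of this sketch:

```python
import numpy as np

def _init_centers(X, k, rng):
    # Greedy farthest-point initialization for stable starting centers.
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - X[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    return X[idx].copy()

def entropy_regularized_fcm(X, n_clusters, lam=1.0, n_iter=50, seed=0):
    """Alternating updates for a c-means objective with an entropy
    regularizer: minimize sum_ij u_ij * d_ij + lam * u_ij * log(u_ij),
    subject to each point's memberships summing to one."""
    rng = np.random.default_rng(seed)
    centers = _init_centers(X, n_clusters, rng)
    for _ in range(n_iter):
        # Squared Euclidean distance of each point to each center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Closed-form membership update: a softmax over -d / lam.
        logits = -d / lam
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        u = np.exp(logits)
        u /= u.sum(axis=1, keepdims=True)
        # Centers are membership-weighted means of the data.
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

# Two well-separated Gaussian blobs as a toy data set.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
u, centers = entropy_regularized_fcm(X, n_clusters=2)
```

The entropy term replaces classical FCM's fuzzifier exponent: lam controls how soft the memberships are, and the sum-to-one constraint is satisfied exactly by the softmax form.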
Procedia PDF Downloads 259