Search results for: traffic data
23528 A Study on Using Network Coding for Packet Transmissions in Wireless Sensor Networks
Authors: Rei-Heng Cheng, Wen-Pinn Fang
Abstract:
A wireless sensor network (WSN) is composed of a large number of sensors and one or a few base stations; the sensors detect specific events and report them back to the base station(s). However, reducing energy consumption to extend the network lifetime is a problem that cannot be ignored in wireless sensor networks. Since a sensor network is used to monitor a region or specific events, reliably delivering information back to the base station is surely important. Network coding is often used to enhance the reliability of network transmission. When a node needs to send M data packets, it encodes them together with redundant data and sends out a total of M + R packets. If the receiver obtains any M of these M + R packets, it can decode and recover the original M data packets. Transmitting redundant packets, however, incurs extra energy consumption. This paper explores the relationship between wireless transmission quality and the number of redundant packets. The goal is for each sensor to overhear nearby transmissions, learn the wireless transmission quality around it, and dynamically determine the number of redundant packets used in network coding.
Keywords: energy consumption, network coding, transmission reliability, wireless sensor networks
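The M + R redundancy scheme described in the abstract can be sketched with a small erasure code: M data symbols are encoded into M + R coded packets with a Vandermonde matrix over the prime field GF(257), so that any M received packets suffice to recover the originals. This is an illustrative sketch, not the paper's own code; packet contents and the field size are assumptions.

```python
# Sketch of network-coded redundancy: encode M symbols into M + R packets
# over GF(257); any M received packets let the receiver decode.
P = 257  # prime field, large enough to hold one byte per symbol

def encode(data_packets, R):
    """Produce M + R coded packets; packet i carries sum(i^j * d_j) mod P."""
    M = len(data_packets)
    return [(i, sum(pow(i, j, P) * d for j, d in enumerate(data_packets)) % P)
            for i in range(M + R)]

def decode(received, M):
    """Recover the M originals from any M received (index, value) pairs."""
    rows = [[pow(i, j, P) for j in range(M)] + [v] for i, v in received[:M]]
    for col in range(M):                       # Gaussian elimination mod P
        piv = next(r for r in range(col, M) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)    # inverse via Fermat's little theorem
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(M):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[j][M] for j in range(M)]

data = [10, 20, 30]           # M = 3 data symbols
coded = encode(data, 2)       # M + R = 5 packets on the air
survivors = [coded[0], coded[2], coded[4]]   # any 3 of the 5 survive
recovered = decode(survivors, 3)
```

The decode step works because any M rows of a Vandermonde matrix with distinct evaluation points are linearly independent, which is exactly the "any M of M + R" property the abstract relies on.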
Procedia PDF Downloads 391
23527 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies
Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan
Abstract:
The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor that collects spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, is one of the important challenges in developing countries for tourists trying to make the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist guide in which tourists can search for places of interest based on their requested time of travel. The service is designed in a three-tier architecture comprising data, logical-processing, and presentation tiers. It is implemented with open-source software, client- and server-side programming languages (OpenLayers2, AJAX, and PHP), GeoServer as the map server, and the Web Feature Service (WFS) standard. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine enables tourists to find a place of interest by province, city, and location at a specific time of interest.
Implementing the tourist-guide service in this way has the following effects: current tourists participate in a free data collection and sharing process for future tourists; data are shared and accessed in real time by all; a blind selection of the travel destination is avoided; and, significantly, the cost of providing such services decreases.
Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping
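To make the WFS part of the architecture concrete, the sketch below builds a standard WFS 1.1.0 GetFeature request such as a browser client might send to the GeoServer instance described above. The host, layer name, and bounding box are invented for illustration; only the WFS query parameters themselves follow the standard.

```python
# Sketch: build a WFS 1.1.0 GetFeature URL for volunteer-submitted points
# inside a bounding box. Host and layer name are hypothetical.
from urllib.parse import urlencode

def wfs_getfeature_url(base, layer, bbox, srs="EPSG:4326"):
    """Return a GetFeature request URL for one layer restricted to a bbox."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,
        "srsName": srs,
        "bbox": ",".join(str(c) for c in bbox) + "," + srs,
        "outputFormat": "application/json",
    }
    return base + "?" + urlencode(params)

url = wfs_getfeature_url("http://localhost:8080/geoserver/wfs",
                         "tourism:poi", (51.2, 35.5, 51.6, 35.9))
```

A browser client (e.g. an OpenLayers layer source) would issue this URL with AJAX and render the returned GeoJSON features on the map.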
Procedia PDF Downloads 98
23526 Effect of Bank Specific and Macro Economic Factors on Credit Risk of Islamic Banks in Pakistan
Authors: Mati Ullah, Shams Ur Rahman
Abstract:
The purpose of this study is to investigate the effect of macroeconomic and bank-specific factors on credit risk in Islamic banking in Pakistan. The future of financial institutions largely depends on how well they manage risks, and credit risk is an important risk affecting the banking sector. The study uses quarterly data for a period of six years, from 1 July 2014 to 30 June 2020. The data set consists of secondary data extracted from the websites of the State Bank and the World Bank and from the financial statements of the banks concerned. An ordinary least squares (OLS) model was used for the analysis. The results support the hypothesis that macroeconomic and bank-specific factors have a significant effect on credit risk. Among the macroeconomic variables, inflation and the exchange rate have significant positive effects on credit risk, while gross domestic product has a significant negative relationship with it; the corporate rate has no significant relation with credit risk. Among the bank-specific variables, size, management efficiency, net profit share income, and capital adequacy positively and significantly influence credit risk, whereas the loan-to-deposit ratio has a negative but insignificant relationship with it. The contribution of this article is that it reaches conclusions similar to earlier work regarding the influence of bank-specific factors on credit risk.
Keywords: credit risk, Islamic banks, macroeconomic variables, bank-specific variables
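The OLS estimation used in the study can be sketched in a few lines: regress a credit-risk measure on macroeconomic regressors and read off the coefficients. The data below are simulated for illustration only; they are not the study's 24 quarters of Pakistani banking data.

```python
# Minimal sketch of OLS: credit risk regressed on inflation and the
# exchange rate. All series are synthetic stand-ins.
import numpy as np

def ols(y, X):
    """Return OLS coefficients (intercept first) for y on columns of X."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
inflation = rng.normal(8, 2, 24)     # 24 quarters, invented scale
fx_rate = rng.normal(150, 10, 24)
# Simulated "true" relationship: positive effects, small noise
credit_risk = 0.5 + 0.3 * inflation + 0.01 * fx_rate + rng.normal(0, 0.1, 24)
beta = ols(credit_risk, np.column_stack([inflation, fx_rate]))
```

With this simulated data the estimated slopes land close to the true 0.3 and 0.01, which is the sense in which OLS "supports" a positive effect in the abstract.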
Procedia PDF Downloads 19
23525 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size
Authors: Jude Opara, Esemokumo Perewarebo Akpos
Abstract:
This paper compares non-parametric linear regression with its parametric counterpart on a large sample. A data set of anthropometric measurements of 50 randomly selected primary school pupils was used for the analysis. The data were subjected to a normality test using the Anderson-Darling technique, and the residuals of the commonly used least squares regression, which fits an equation to a set of (x, y) data points, were found not to be normally distributed (i.e., they do not follow a Gaussian distribution). The algorithms for nonparametric Theil's regression are stated in this paper, as well as those of its parametric OLS counterpart, and the analysis was carried out in the R programming language. The results showed a significant relationship between the response and the explanatory variable for both the parametric and the non-parametric regression. To judge the efficiency of one method over the other, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) were used; the nonparametric regression performs better than its parametric counterpart, as shown by its lower AIC and BIC values. The study recommends that future researchers examine the presence of outliers in the data set, remove them if detected, and re-analyze to compare results.
Keywords: Theil's regression, Bayesian information criterion, Akaike information criterion, OLS
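Theil's regression, named in the abstract, takes the slope as the median of the slopes over all point pairs and the intercept as the median of y - slope * x. The sketch below (in Python rather than the paper's R) uses a synthetic series with one gross outlier to show why the estimator is robust where OLS is not.

```python
# Minimal Theil's (Theil-Sen) regression: median of pairwise slopes.
from itertools import combinations
from statistics import median

def theil_sen(xs, ys):
    """Return (slope, intercept) of the Theil-Sen line through the data."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    slope = median(slopes)
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.0, 30.0, 12.1]   # underlying slope ~2; one gross outlier
slope, intercept = theil_sen(xs, ys)
```

The single outlier at x = 5 barely moves the median of the pairwise slopes, so the fitted slope stays near 2, whereas an OLS fit on the same points would be pulled far off.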
Procedia PDF Downloads 305
23524 Improving the Performance of Requisition Document Online System for Royal Thai Army by Using Time Series Model
Authors: D. Prangchumpol
Abstract:
This research presents a method for forecasting requisition-document demand for military units using exponential smoothing. The data used in the forecast are actual requisition-document data from The Adjutant General Department. The forecasting results show that Holt–Winters' trend and seasonality method with α = 0.1, β = 0, γ = 0 is appropriate and fits the document requisition series. In addition, the researcher has developed an online requisition system to improve the performance of requisition-document handling in The Adjutant General Department and to ensure that operations can be checked.
Keywords: requisition, holt–winters, time series, royal thai army
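The Holt–Winters method named in the abstract maintains a level, a trend, and seasonal terms, each updated with its own smoothing constant (the paper's α = 0.1, β = 0, γ = 0). The additive-form sketch below uses a made-up demand series and a common simple initialization; it illustrates the update equations, not the paper's exact fitting procedure.

```python
# Minimal additive Holt-Winters smoothing; returns the one-step forecast.
def holt_winters_additive(y, m, alpha, beta, gamma):
    """Fit level l, trend b, and m seasonal terms s; forecast y[len(y)]."""
    l, b = y[0], y[1] - y[0]                       # simple initialization
    s = [y[i] - sum(y[:m]) / m for i in range(m)]  # initial seasonal terms
    for t in range(m, len(y)):
        l_prev = l
        l = alpha * (y[t] - s[t % m]) + (1 - alpha) * (l + b)   # level update
        b = beta * (l - l_prev) + (1 - beta) * b                # trend update
        s[t % m] = gamma * (y[t] - l) + (1 - gamma) * s[t % m]  # season update
    return l + b + s[len(y) % m]   # forecast for the next period

# Sanity check on a flat demand series: forecast should stay at the level.
f1 = holt_winters_additive([10.0] * 12, 4, 0.1, 0.0, 0.0)
f2 = holt_winters_additive([10.0] * 12, 4, 0.5, 0.5, 0.5)
```

With β = 0 and γ = 0, as in the paper, the trend and seasonal terms keep their initial values and only the level is smoothed.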
Procedia PDF Downloads 308
23523 Study and Analysis of Permeable Articulated Concrete Blocks Pavement: With Reference to Indian Context
Authors: Shrikant Charhate, Gayatri Deshpande
Abstract:
Permeable pavements offer significant benefits over conventional pavements in terms of sustainability and environmental impact, managing runoff and infiltration while carrying traffic. Several countries use this technique, especially at locations where durability and other parameters are of importance; however, sparse work has been done on the concept, and in India it is yet to be adopted. In this work, the characterization and development of a Permeable Articulated Concrete Blocks (PACB) pavement design is described and discussed with reference to Indian conditions. The experimentation and in-depth analysis were carried out considering soil erosion, water logging, and dust, which are significant challenges caused by pavement impermeability. Concrete blocks of size 16.5″ × 6.5″ × 7″, with a 4″ arch underneath and ½″ PVC holes for articulation, were cast and tested for flexural strength. The articulation was done with nylon ropes, forming a series of connected blocks, and the total spacing between the blocks was kept at about 8 to 10% of the total area. Hydraulic testing was carried out by placing the articulated blocks over layers of soil, geotextile, and clean angular aggregate, to measure the percentage of seepage through the entire system. The experimental results showed that, with this block shape, the flexural strength achieved was beyond the permissible limit. Such blocks, in combination with these layers, could be a very useful innovation in Indian conditions and an alternative to traditional blocks for long-term sustainability at various locations.
Keywords: connections, geotextile, permeable ACB, pavements, stone base
Procedia PDF Downloads 286
23522 Geoelectric Survey for Groundwater Potential in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria
Authors: Ibrahim Mohammed, Suleiman Taofiq, Muhammad Naziru Yahya
Abstract:
Geoelectrical measurements using the Schlumberger Vertical Electrical Sounding (VES) method were carried out at Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria, with the aim of determining the groundwater potential of the area. Twelve (12) VES data sets were collected using a Terrameter (ABEM SAS 300c) and analyzed using computer software (IPI2win), which gives an automatic interpretation of the apparent resistivity. The interpreted VES data were used to characterize three to five geoelectric layers, from which the aquifer units were delineated. The analysis indicated that a water-bearing formation exists in the third and fourth layers, with resistivity ranges of 312 to 767 Ωm and 9.51 to 681 Ωm, respectively. The thickness of the formation ranges from 14.7 to 41.8 m, and its depth from 8.22 to 53.7 m. Based on these results, five (5) VES stations (A4, A5, A6, B1, and B2) were recommended as the most viable locations for groundwater exploration in the study area. The VES results for the entire area indicated that the water-bearing formation occurred at a maximum depth of 53.7 m at the time of this survey.
Keywords: aquifer, depth, groundwater, resistivity, Schlumberger
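Before software such as IPI2win inverts a sounding into layers, each Schlumberger reading is reduced to an apparent resistivity: the geometric factor K = π((AB/2)² − (MN/2)²)/MN times the measured ΔV/I. The sketch below applies this standard formula to an invented field reading; the electrode spacings and voltages are not from the survey.

```python
# Sketch: apparent resistivity of one Schlumberger VES reading.
import math

def schlumberger_rho_a(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm-m) = geometric factor K times dV/I."""
    k = math.pi * (ab_half**2 - (mn / 2)**2) / mn   # geometric factor
    return k * delta_v / current

# Hypothetical reading: AB/2 = 100 m, MN = 10 m, dV = 0.05 V, I = 0.2 A
rho = schlumberger_rho_a(100.0, 10.0, 0.05, 0.2)
```

Repeating this for each current-electrode half-spacing AB/2 produces the sounding curve that the inversion software fits with a layered-earth model.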
Procedia PDF Downloads 166
23521 The Integration of Patient Health Record Generated from Wearable and Internet of Things Devices into Health Information Exchanges
Authors: Dalvin D. Hill, Hector M. Castro Garcia
Abstract:
A growing number of individuals use wearable devices on a daily basis, with usage and functionality varying from user to user. One popular use of these devices is to track health-related activities, which are typically stored in the device's memory or uploaded to an account in the cloud; under the current trend, the data accumulated from the wearable device are kept in a standalone location. In many of these cases, this health-related data is not considered in the holistic view of a user's health lifestyle or record. Yet the data generated by wearable and Internet of Things (IoT) devices can serve as empirical information for a medical provider, since these standalone data can add value to the holistic health record of a patient. This paper proposes a solution to incorporate the data gathered from these wearable and IoT devices into a patient's Personal Health Record (PHR) stored within the confines of a Health Information Exchange (HIE).
Keywords: electronic health record, health information exchanges, internet of things, personal health records, wearable devices, wearables
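One way to picture the proposed integration is a merge step that folds standalone device readings into the PHR before it reaches the HIE. The record layout and field names below are assumptions for illustration, not a real HIE or FHIR schema.

```python
# Sketch: fold wearable/IoT readings into a PHR, tagging provenance.
def merge_into_phr(phr, device_readings):
    """Append device observations to the PHR, tagged with their source."""
    observations = phr.setdefault("observations", [])
    for r in device_readings:
        observations.append({
            "type": r["type"],            # e.g. "heart_rate"
            "value": r["value"],
            "unit": r["unit"],
            "timestamp": r["timestamp"],
            "source": r.get("device", "wearable"),  # provenance for clinicians
        })
    return phr

phr = {"patient_id": "p-001", "observations": []}
readings = [{"type": "heart_rate", "value": 72, "unit": "bpm",
             "timestamp": "2020-01-01T08:00:00Z", "device": "watch-x"}]
merge_into_phr(phr, readings)
```

Keeping the source field on each observation preserves the distinction between clinician-entered data and device-generated data once the merged record enters the exchange.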
Procedia PDF Downloads 128
23520 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while preserving the solution matrix structure, greatly reducing the computational cost compared with the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method detects outliers exactly when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered "clean" data from the proposed method gives much better parameter estimation than use of the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
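The low-rank-plus-sparse idea behind the formulation can be sketched with a simple alternating scheme (rather than the paper's structured SDP): observations M are split into a low-rank part L and a sparse residual S by alternating a truncated SVD with hard thresholding. The rank, threshold, and data below are illustrative assumptions.

```python
# Sketch of low-rank + sparse splitting by alternating SVD and thresholding.
import numpy as np

def lowrank_sparse_split(M, rank, sparse_thresh, iters=50):
    """Split M into a rank-'rank' part L and a sparse residual S."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]       # best rank-r fit
        R = M - L
        S = np.where(np.abs(R) > sparse_thresh, R, 0)   # keep large residuals
    return L, S

rng = np.random.default_rng(1)
L_true = np.outer(rng.normal(size=30), rng.normal(size=20))  # rank-1 "system"
S_true = np.zeros_like(L_true)
S_true[5, 7] = 10.0                                          # one gross outlier
L, S = lowrank_sparse_split(L_true + S_true, rank=1, sparse_thresh=3.0)
```

In the noise-free setting the alternation recovers the planted outlier in S and the rank-1 structure in L, mirroring the abstract's claim of exact detection with no or little noise.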
Procedia PDF Downloads 307
23519 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components
Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler
Abstract:
Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goals of this approach are to cut preventive maintenance expenses by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures; additionally, sensor data monitoring can be centralized. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to build a system for predictive maintenance with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms, and the reference architecture applies to many use cases, such as gas station maintenance, elevator maintenance, and more.
Keywords: case study, internet of things, predictive maintenance, reference architecture
Procedia PDF Downloads 252
23518 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries, such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study builds an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture that suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. The contributions of this study cover four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. In the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task; it covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly with the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols from the sensor units in an environmental sensor network: the former may transfer data into the Cloud directly via WiFi, while the latter usually uses the radio communication inherent in the network, with the data stored in a staging data node before being transmitted to the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contain inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights, covering data injection, streaming data processing, machine learning model training, and the tool that coordinates and schedules the different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications, and (2) it takes an aerospace example with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
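The labeling and feature-engineering step for run-to-failure data can be sketched simply: each cycle of a run receives a remaining-useful-life (RUL) label counting down to the failure at the end, and a trailing rolling mean smooths a sensor channel into a feature. The tiny series below is a stand-in for the public aircraft-engine dataset, not data from it.

```python
# Sketch: RUL labeling and a rolling-mean feature for run-to-failure data.
def label_rul(n_cycles):
    """RUL label per cycle: cycles remaining until the run ends in failure."""
    return [n_cycles - 1 - t for t in range(n_cycles)]

def rolling_mean(values, window):
    """Trailing mean over up to 'window' readings, one value per cycle."""
    return [sum(values[max(0, t + 1 - window): t + 1]) /
            len(values[max(0, t + 1 - window): t + 1])
            for t in range(len(values))]

sensor = [10.0, 12.0, 11.0, 15.0, 20.0, 30.0]   # one degrading sensor channel
labels = label_rul(len(sensor))                  # counts down to 0 at failure
features = rolling_mean(sensor, window=3)
```

Because every cycle of one engine shares the same failure time, records within a run are inter-dependent, which is exactly why the abstract warns that train/test splits must be made per run rather than per record.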
Procedia PDF Downloads 386
23517 Technology of Gyro Orientation Measurement Unit (Gyro Omu) for Underground Utility Mapping Practice
Authors: Mohd Ruzlin Mohd Mokhtar
Abstract:
At present, most operators working on utility projects for power, water, oil, gas, telecommunications, and sewerage use technologies such as total station, the Global Positioning System (GPS), Electromagnetic Locators (EML), and Ground Penetrating Radar (GPR) to perform underground utility mapping. With the increasing popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, most newly installed underground utilities use HDD. The HDD method is seen as simple and causes little disturbance to the public and traffic; thus, it has become the preferred utility installation method in most areas, especially urban areas. HDD utilities are installed much deeper than existing utilities (some reports put HDD at an average of 5 meters in depth). However, this affects the accuracy of existing underground utility mapping technologies: in most Malaysian underground soil conditions, those technologies are limited to a maximum depth of 3 meters, so utilities installed deeper than that cannot be detected with existing detection tools. The accuracy and reliability of existing underground utility mapping technologies and work procedures are therefore in doubt, and a mitigation action plan is required. While installing a new utility using the HDD method, a more accurate underground utility map can be achieved by using the Gyro Orientation Measurement Unit (Gyro OMU) than by existing practice using, e.g., EML and GPR. Gyro OMU is a method to accurately identify the location of the HDD bore; the resulting map can be used to avoid the cost of breakdowns caused by future HDD works relying on inaccurate underground utility mapping.
Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)
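The core of gyro-based bore mapping is dead reckoning: integrating measured azimuth and pitch over surveyed distance increments to trace the drill path in 3-D, independent of soil conditions or depth. The sketch below shows that integration with invented readings; it is an illustration of the principle, not a Gyro OMU implementation.

```python
# Sketch: dead-reckoning a bore path from per-leg gyro readings.
import math

def dead_reckon(start, legs):
    """Each leg: (distance m, azimuth deg from north, pitch deg down). Returns path."""
    x, y, z = start                       # east, north, depth (down positive)
    path = [start]
    for dist, azim, pitch in legs:
        horiz = dist * math.cos(math.radians(pitch))    # horizontal component
        x += horiz * math.sin(math.radians(azim))
        y += horiz * math.cos(math.radians(azim))
        z += dist * math.sin(math.radians(pitch))       # downward component
        path.append((x, y, z))
    return path

# A bore diving at 10 degrees, heading due north, in two 10 m legs
path = dead_reckon((0.0, 0.0, 0.0), [(10, 0, 10), (10, 0, 10)])
```

Because the position comes from the tool's own inertial measurements rather than signals sensed from the surface, the method keeps its accuracy at the 5 m-plus depths where EML and GPR fail.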
Procedia PDF Downloads 140
23516 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor the condition of their roads to minimize road maintenance costs and to maintain public safety and ride quality. Existing road condition surveys rely on manual observation of roads using standard survey forms, done by qualified road condition surveyors or engineers either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive, since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming, infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys with low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), providing a more cost-effective and efficient road condition monitoring approach that can also deliver real-time road conditions. Additionally, data fusion is employed to enhance the road condition assessment with data from both vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 124
23515 Enhancing Student Learning Outcomes Using Engineering Design Process: Case Study in Physics Course
Authors: Thien Van Ngo
Abstract:
The engineering design process is a systematic approach to solving problems: identifying a problem, brainstorming solutions, prototyping and testing those solutions, and evaluating the results. It can be used to teach students how to solve problems in a creative and innovative way. The aim of this study was to investigate the effectiveness of using the engineering design process to enhance student learning outcomes in a physics course. A mixed research method was used: quantitative data were collected using a pretest-posttest control group design, and qualitative data were collected using semi-structured interviews. The sample was 150 first-year students in the Department of Mechanical Engineering Technology at Cao Thang Technical College in Vietnam in the 2022-2023 school year. The pretest was administered to both groups at the beginning of the study and the posttest at the end; the semi-structured interviews were conducted after the posttest with a sample of eight students from the experimental group. The quantitative data were analyzed using independent-samples t-tests, and the qualitative data using thematic analysis. The quantitative results showed that students in the experimental group, who were taught using the engineering design process, had significantly higher post-test scores on physics problem-solving than students in the control group, who were taught using the conventional method. The qualitative data showed that students in the experimental group were more motivated and engaged in the learning process than students in the control group, and they reported that the engineering design process was a more effective way of learning physics.
The findings suggest that the engineering design process can be an effective way of enhancing student learning outcomes in physics courses: it engages students in the learning process and helps them develop problem-solving skills.
Keywords: engineering design process, problem-solving, learning outcome of physics, students' physics competencies, deep learning
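The group comparison above rests on an independent-samples t-test. The sketch below computes Welch's version of the statistic on invented post-test scores (the study's real sample was 150 students); a value well above ~2 corresponds to the significant difference the abstract reports.

```python
# Sketch: Welch's t statistic for two independent groups of post-test scores.
import math

def welch_t(a, b):
    """Welch's t: mean difference over the combined standard error."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

experimental = [78, 82, 75, 88, 91, 84, 79, 86]   # invented scores
control = [70, 68, 74, 66, 72, 69, 75, 71]
t = welch_t(experimental, control)
```

Welch's variant is chosen here because it does not assume equal variances in the two groups, a safe default for pretest-posttest designs.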
Procedia PDF Downloads 65
23514 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank
Authors: Jalal Haghighat Monfared, Zahra Akbari
Abstract:
Today, business executives need useful information to make better decisions. Banks, too, use information tools so that, with the help of business intelligence, they can rapidly extract information from sources and direct the decision-making process toward their desired goals. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments who use business intelligence reports (190 people); a sample of 123 was randomly determined by statistical methods. Statistical inference was used for data analysis and hypothesis testing: first, the normality of the data was examined using the Kolmogorov-Smirnov test; next, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis; finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the capabilities examined, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was most strongly correlated with the dependent variable. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility, in particular the ability to produce custom-formatted reports.
Next to flexibility, the quality of data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality, including the internal or external source of the data, its quantitative or qualitative type, its credibility, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
Keywords: business intelligence, business intelligence capability, decision making, decision quality
Procedia PDF Downloads 112
23513 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel
Abstract:
Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization, and is projected to become a metropolitan city in the near future, resulting in various construction activities. Because soil testing is required before construction can commence, construction companies and contractors conduct soil tests periodically. The focus of this study is the process of creating a digitally formatted spatial database of geotechnical data integrated with a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy of the data and removing redundancy, and standardizing and organizing the geotechnical information for integration into the database. Once complete, the database is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo to Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (t/m²), and the interpolation techniques were cross-validated to ensure accuracy. The resulting GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS to provide an efficient solution to analyses that would otherwise be tedious to carry out by other means. Not only does GIS offer greater accuracy, it also generates valuable information that can be used as input for correlation analysis, and the system serves as a decision support tool for geotechnical engineers.
Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-value, bearing capacity
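The interpolation step, assigning estimated values to unsampled locations from borehole samples, can be sketched with inverse-distance weighting (IDW), a simpler cousin of the Topo to Raster tool named in the abstract. Borehole coordinates and N-values below are invented for illustration.

```python
# Sketch: inverse-distance-weighted estimate of an SPT N-value at (x, y).
import math

def idw(samples, x, y, power=2.0):
    """IDW estimate at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return v                  # exactly on a borehole: use its value
        w = d ** -power               # nearer boreholes get more weight
        num += w * v
        den += w
    return num / den

boreholes = [(0, 0, 12.0), (100, 0, 20.0), (0, 100, 16.0), (100, 100, 24.0)]
n_value = idw(boreholes, 50, 50)      # centre of the four boreholes
```

At the centre the four boreholes are equidistant, so IDW reduces to their plain average; the same cross-validation idea the abstract mentions (withhold one borehole, predict it from the rest) applies directly to this estimator.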
Procedia PDF Downloads 74
23512 Using TRACE and SNAP Codes to Establish the Model of Maanshan PWR for SBO Accident
Authors: B. R. Shen, J. R. Wang, J. H. Yang, S. W. Chen, C. Shih, Y. Chiang, Y. F. Chang, Y. H. Huang
Abstract:
In this research, the TRACE code with the SNAP interface was used to simulate and analyze a station blackout (SBO) accident in the Maanshan pressurized water reactor (PWR) nuclear power plant (NPP). There are four main steps in this research. First, SBO accident data for Maanshan NPP were collected. Second, the TRACE/SNAP model of Maanshan NPP was established using these data. Third, this TRACE/SNAP model was used to simulate and analyze the SBO accident. Finally, the simulation and analysis of the SBO with mitigation equipment were performed. The TRACE analysis results are consistent with the Maanshan NPP data, and according to the TRACE predictions, the mitigation equipment can maintain the safety of Maanshan during an SBO.
Keywords: pressurized water reactor (PWR), TRACE, station blackout (SBO), Maanshan
Procedia PDF Downloads 194
23511 A Comparative and Doctrinal Analysis towards the Investigation of a Right to Be Forgotten in Hong Kong
Authors: Jojo Y. C. Mo
Abstract:
Memories are good. They remind us of people, places and experiences that we cherish. But memories cannot be changed, and there may well be memories that we do not want to remember. This is particularly true of information that causes us embarrassment and humiliation, or simply because it is private: we all want to erase or delete such information. This desire to delete was recognised by the Court of Justice of the European Union in the 2014 case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, in which the court ordered Google to remove links to information about the complainant that he wished to be removed. This so-called 'right to be forgotten' received serious attention and, significantly, the European Parliament and the Council of the European Union enacted the General Data Protection Regulation (GDPR) to provide a more structured and normative framework for implementing the right to be forgotten across the EU. This development in data protection law will undoubtedly have a significant impact on companies and corporations both within the EU and outside it. Hong Kong, being one of the world's leading financial and commercial centres as well as one of the first jurisdictions in Asia to implement a comprehensive piece of data protection legislation, is therefore a jurisdiction worth looking into. This article aims to investigate the following: a) whether there is a right to be forgotten under the existing Hong Kong data protection legislation; b) if not, whether such a provision is necessary and why. This article utilises a comparative methodology based on a study of primary and secondary resources, including scholarly articles, government and law commission reports and working papers, and relevant international treaties, constitutional documents, case law and legislation.
The author will primarily engage in literature and case-law review as well as comparative and doctrinal analyses. The completion of this article will provide privacy researchers with more concrete principles and data for further research on privacy and data protection in Hong Kong and internationally, and will provide a basis for policy makers to assess the rationale and need for a right to be forgotten in Hong Kong. Keywords: privacy, right to be forgotten, data protection, Hong Kong
Procedia PDF Downloads 190
23510 Damage Assessment Based on Full-Polarimetric Decompositions in the 2017 Colombia Landslide
Authors: Hyeongju Jeon, Yonghyun Kim, Yongil Kim
Abstract:
Synthetic aperture radar (SAR) is an effective tool for assessing damage induced by disasters, owing to its all-weather, day-and-night acquisition capability. In this paper, the 2017 Colombia landslide was observed using full-polarimetric ALOS/PALSAR-2 data. Polarimetric decompositions, including the Freeman-Durden decomposition and the Cloude decomposition, are utilized to analyze the changes in scattering mechanisms before and after the landslide, and these analyses are used to detect the areas damaged by the landslide. Experimental results validate the efficiency of full-polarimetric SAR data, since the damaged areas can be well discriminated. We can thus conclude that the proposed method using full-polarimetric data has great potential for damage assessment of landslides. Keywords: Synthetic Aperture Radar (SAR), polarimetric decomposition, damage assessment, landslide
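One quantity such eigen-based decompositions yield is the polarimetric scattering entropy H, which summarizes how many scattering mechanisms contribute at a pixel. A minimal sketch, using hypothetical coherency matrices rather than the ALOS/PALSAR-2 data:

```python
import numpy as np

def scattering_entropy(T):
    """Scattering entropy H (normalized to [0, 1]) from a 3x3 Hermitian
    coherency matrix T, via its eigenvalue spectrum."""
    eigvals = np.clip(np.linalg.eigvalsh(T), 0, None)  # real, non-negative
    p = eigvals / eigvals.sum()                        # pseudo-probabilities
    p = p[p > 0]                                       # 0 * log(0) := 0
    return float(-(p * np.log(p)).sum() / np.log(3))

# Hypothetical coherency matrix dominated by a single scattering mechanism
T_single = np.diag([1.0, 0.0, 0.0])
# Fully depolarized (e.g. volume-like) scattering: equal eigenvalues
T_random = np.diag([1.0, 1.0, 1.0])
```

A single dominant mechanism gives H = 0; three equal eigenvalues give H = 1, so a rise in entropy after the event is one cue for surface disturbance.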
Procedia PDF Downloads 390
23509 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weights Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data is enough to obtain high accuracy, and that a simple algorithm like linear regression performs similarly to more sophisticated models while taking less time and fewer resources to implement. Keywords: finance, machine learning, opening price, stock market
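The linear model's strong showing can be reproduced in miniature: fit ordinary least squares to lagged prices and extrapolate one step. The sketch below uses a synthetic price series, not the authors' dataset:

```python
import numpy as np

def fit_linear(prices, window=5):
    """Least-squares fit predicting the next price from the previous `window` prices."""
    X, y = [], []
    for i in range(window, len(prices)):
        X.append(prices[i - window:i])
        y.append(prices[i])
    X = np.column_stack([np.ones(len(X)), np.array(X)])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
    return coef

def predict_next(prices, coef, window=5):
    x = np.concatenate([[1.0], prices[-window:]])
    return float(x @ coef)

# Hypothetical price series with a simple linear trend: 100, 101, ..., 129
prices = np.arange(100.0, 130.0)
coef = fit_linear(prices)
next_price = predict_next(prices, coef)
```

On a perfectly linear trend the one-step-ahead prediction is exact (130 here); on real prices the residual plays the role of the testing error quoted above.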
Procedia PDF Downloads 190
23508 Supervised Learning for Cyber Threat Intelligence
Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk
Abstract:
The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. The main problem is threat intelligence that is inaccurate, incomplete, outdated, or of little value. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms. Keywords: threat information sharing, supervised learning, data classification, performance evaluation
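The evaluation metrics named above can be computed directly from a classifier's confusion counts. A minimal sketch with hypothetical benign/malicious labels, not the paper's data:

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall and F1-score for one class of a threat classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged items, how many real
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real threats, how many caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels: 1 = malicious indicator, 0 = benign
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
p, r, f1 = prf1(y_true, y_pred)
```

Here one threat is missed and one benign item is flagged, giving precision, recall and F1 of 0.75 each; support is simply the count of true instances per class.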
Procedia PDF Downloads 150
23507 Methodologies, Findings, Discussion, and Limitations in Global, Multi-Lingual Research: We Are All Alone - Chinese Internet Drama
Authors: Patricia Portugal Marques de Carvalho Lourenco
Abstract:
A three-phase, multi-lingual methodological path was designed, constructed and carried out using the 2020 Chinese internet drama series We Are All Alone as a case study. Phase one, the backbone of the research, comprised secondary data analysis, providing the structure on which the next two phases would be built. It incorporated a Google Scholar and a Baidu Index analysis, the Star Network Influence Index, the top two drama reviews on Mydramalist.com, an article written about the drama, and scrutiny of related Chinese blogs and websites. Phase two was field research carried out across Latin Europe, and phase three was focused on social media, taking into account that perceptions are memory-conditioned and based on the recall of past ideas. Overall, the research has shown the poor cultural expression of Chinese entertainment in Latin Europe and demonstrated the absence of Chinese content among French, Italian, Portuguese and Spanish business-to-consumer retailers; a reflection of its low significance in Latin European markets and of the short life cycle of entertainment products in general: bubble-gum, disposable goods without a mid- to long-term effect on consumers' lives. The process of conducting comprehensive international research was complex and time-consuming: data were not always available in Mandarin, and the work was constrained by the researcher's linguistic deficiency, limited knowledge of Chinese culture, and problems of cultural equivalence. Despite steps taken to minimize these constraints, theoretical limitations concerning Latin Europe and China still occurred. Data accuracy was disputable; sampling and data collection/analysis methods were heterogeneous; and ascertaining the data requirements and the method of analysis needed to achieve construct equivalence was challenging and laborious to operationalize.
Secondary data were also not always readily available in Mandarin; yet, in spite of this array of limitations, the research was completed and results were produced. Keywords: research methodologies, international research, primary data, secondary data, research limitations, online dramas, China, Latin Europe
Procedia PDF Downloads 68
23506 Node Insertion in Coalescence Hidden-Variable Fractal Interpolation Surface
Authors: Srijanani Anurag Prasad
Abstract:
The Coalescence Hidden-variable Fractal Interpolation Surface (CHFIS) is built by combining interpolation data through an Iterated Function System (IFS). When a single point is inserted, the interpolation data of a CHFIS comprise a row and/or column of uncertain values. Alternatively, a row and/or column of additional points is placed in the given interpolation data to demonstrate the node-inserted CHFIS. There are three techniques for inserting new points, corresponding to the row and/or column of nodes inserted, and each method is further classified into four types based on the values of the inserted nodes. As a result, numerous forms of node insertion can be found in a CHFIS. Keywords: fractal, interpolation, iterated function system, coalescence, node insertion, knot insertion
Procedia PDF Downloads 100
23505 Assessing Walkability in New Cities around Cairo
Authors: Lobna Ahmed Galal
Abstract:
Modal integration has been given minimal consideration in the cities of developing countries, alongside the declining dominance of public transport and the predominance of informal transport: the modal share of informal taxis in Greater Cairo increased from 6% in 1987 to 37% in 2001 and has since risen even higher. Informal and non-motorized modes of transport act as gap fillers by feeding other modes of transport, not by design or choice but often for lack of accessible and affordable public transport. Yet non-motorized transport remains peripheral, with minimal priority in urban planning and investment and a lack of strong policies to support it. For the authorities, development is associated with technology and motorized transport, so promoting non-motorized transport may be seen as running counter to development; there is also a social stigma against non-motorized transport, which is seen as a travel mode for the poor. Cairo, as a city of a developing country, has poor-quality infrastructure for non-motorized transport: dedicated corridors are absent, and where they exist they are often encroached upon for commercial purposes; traffic lanes are widened at the expense of sidewalks; footpaths are missing or overcrowded; and lighting is poor, making walking unsafe. Financial support for such facilities is also lacking, as it is often considered beyond the city's capabilities. This paper deals with the objective measurement of the built environment as it relates to walking in some neighborhoods of the new cities around Cairo, and compares the results of those objective measures with the results of a self-reported survey. The paper's first objective is to show how the 'walkability of community neighborhoods' index works in the context of neighborhoods of new cities around Cairo. The objective measurement procedure has high potential to be carried out using GIS. Keywords: assessing, built environment, Cairo, walkability
Procedia PDF Downloads 383
23504 Optimizing the Efficiency of Measuring Instruments in Ouagadougou-Burkina Faso
Authors: Moses Emetere, Marvel Akinyemi, S. E. Sanni
Abstract:
At the moment, the AERONET and AMMA databases show a large volume of data loss. With only about 47% of the data set available to scientists, it is evident that accurate nowcasts or forecasts cannot be guaranteed. The calibration constants of most radiosondes or weather stations are not compatible with the atmospheric conditions of the West African climate. A dispersion model was developed to incorporate salient mathematical representations such as a unified number, derived to describe the turbulence of aerosol transport in the frictional layer of the lower atmosphere. A fourteen-year data set from the Multi-angle Imaging SpectroRadiometer (MISR) was tested using the dispersion model. A yearly estimation of the atmospheric constants over Ouagadougou was obtained with the model at about 87.5% accuracy. It further revealed that the average atmospheric constants for Ouagadougou, Burkina Faso are a_1 = 0.626 and a_2 = 0.7999, with tuning constants n_1 = 0.09835 and n_2 = 0.266. The yearly atmospheric constants also affirmed that the lower atmosphere of Ouagadougou is very dynamic. Hence, it is recommended that radiosonde and weather-station manufacturers constantly review the atmospheric constants over a geographical location to enable about eighty percent data retrieval. Keywords: aerosols retention, aerosols loading, statistics, analytical technique
Procedia PDF Downloads 315
23503 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon
Abstract:
The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), the root mean square error (RMSE), and the estimated bias (EB). In this study, we applied these methods to impute missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set. Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
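For a linear relationship, EM-based imputation reduces to alternating between filling the missing values with model predictions (E-step) and refitting the model on the completed data (M-step). A minimal sketch on synthetic noiseless data, not the paper's simulation design:

```python
import numpy as np

def em_impute(x, y, n_iter=50):
    """EM-style imputation of missing y values (NaN) under a linear x-y relationship:
    alternately fill missing y with model predictions and refit the line."""
    y = np.array(y, dtype=float)
    miss = np.isnan(y)
    y[miss] = np.nanmean(y)                # initial fill with the observed mean
    for _ in range(n_iter):
        a, b = np.polyfit(x, y, 1)         # M-step: refit slope and intercept
        y[miss] = a * x[miss] + b          # E-step: update the missing entries
    return y

# Hypothetical data on the exact line y = 2x + 1, with two values missing
x = np.arange(10.0)
y_true = 2 * x + 1
y_miss = y_true.copy()
y_miss[[3, 7]] = np.nan
y_hat = em_impute(x, y_miss)
mae = np.mean(np.abs(y_hat - y_true))
```

On noiseless data the loop converges to the true line, so the MAE goes to zero; with noise or bootstrapping (as in EMB) the same loop is wrapped in resampling to quantify uncertainty.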
Procedia PDF Downloads 399
23502 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models
Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand
Abstract:
Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities: biased algorithms can lead to misclassification and to reduced resource allocation and monitoring in prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same populations, a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health record (EHR) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models on both metrics. These results suggest that linear models can sometimes be the optimal choice for imputing laboratory variables, in terms of both imputation efficiency and the uncertainty of the predicted values. Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias
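The two metrics used above are straightforward to compute. The sketch below defines MAPE and RMSE and scores two hypothetical imputations of a small set of lab values; the numbers are illustrative, not from the EHR datasets:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (%); defined for nonzero y_true."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def rmse(y_true, y_pred):
    """Root mean squared error, in the units of the variable."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical true lab values and two candidate imputations for them
truth = [100.0, 200.0, 400.0]
linear_fill = [110.0, 190.0, 390.0]   # e.g. from a regression model
mean_fill = [233.3, 233.3, 233.3]     # naive baseline: fill with the mean
```

Note that MAPE weights errors relative to the true magnitude while RMSE weights them absolutely, which is why the study reports both.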
Procedia PDF Downloads 85
23501 Pavement Failures and Its Maintenance
Authors: Maulik L. Sisodia, Tirth K. Raval, Aarsh S. Mistry
Abstract:
This paper summarizes ongoing research on defects and maintenance in both flexible and rigid pavements. Various defects have been identified for as long as flexible and rigid pavements have existed. Flexible pavement failure is defined in terms of decreasing serviceability caused by the development of cracks, ruts, potholes, etc.; a flexible pavement structure can be destroyed in a single season by water penetration. Defects in flexible pavements are a problem of multiple dimensions: the phenomenal growth of vehicular traffic (in terms of the axle loading of commercial vehicles), the rapid expansion of the road network, the non-availability of suitable technology, materials, equipment and skilled labor, and poor fund allocation have all added complexity to the problem. In rigid pavements, different types of distress produce failures such as joint spalling, faulting, shrinkage cracking, punch-outs and corner breaks. Applying corrections to the existing surface will enhance the life of the maintenance works as well as that of the strengthening layer. Maintenance of a road network involves a variety of operations: identification of deficiencies; planning, programming and scheduling for actual implementation in the field; and monitoring. The essential objective should be to keep the road surface and appurtenances in good condition and to extend the life of the road assets to their design life. The paper describes lessons learnt from pavement failures and problems experienced during the last few years on a number of projects in India. Broadly, the activities include identification of defects and their possible causes, determination of appropriate remedial measures, implementation of these in the field, and monitoring of the results. Keywords: flexible pavements, rigid pavements, defects, maintenance
Procedia PDF Downloads 172
23500 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial laser scanning (TLS) is used to measure large structures; it provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. Point clouds provide a basis for rapid modeling in industrial automation, architecture, construction and the maintenance of civil infrastructure. However, TLS produces a huge amount of point-cloud data, and registration, extraction and visualization all require processing a massive amount of scan data. The octree can be applied to the shape management of large structures because it reduces the size of the scan data while maintaining its attributes. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University; the scanned structure is a steel-frame bridge, and the TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, creating a basis for the shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291). Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
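The condensation step can be sketched at the leaf level of an octree: points are bucketed into voxels of a chosen resolution, and each occupied voxel keeps one representative. The point cloud below is random, not the bridge scan:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Condense a point cloud by keeping one representative (the centroid)
    per occupied voxel, as at the leaf level of an octree partition."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / voxel_size).astype(int)   # integer voxel indices
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

# Hypothetical dense scan: 1000 points in a 2 m cube
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3)) * 2.0
condensed = voxel_downsample(cloud, voxel_size=0.5)    # at most 4^3 = 64 voxels
```

Here 1000 points collapse to at most 64 representatives, a reduction of over 90%, comparable in spirit to the 92% condensation reported for the bridge scan; a full octree would apply this subdivision recursively rather than at a single fixed resolution.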
Procedia PDF Downloads 297
23499 Exploiting Kinetic and Kinematic Data to Plot Cyclograms for Managing the Rehabilitation Process of BKAs by Applying Neural Networks
Authors: L. Parisi
Abstract:
Kinematic data usefully correlate vector quantities in space with scalar parameters in time to assess the degree of symmetry between the intact limb and the amputated limb with respect to a normal model derived from the gait of control-group participants. Furthermore, these data allow a doctor to make a preliminary evaluation of the usefulness of a given rehabilitation therapy. Kinetic curves allow the analysis of ground reaction forces (GRFs) to assess the appropriateness of human motion, and electromyography (EMG) allows the analysis of the fundamental lower-limb force contributions to quantify the level of gait asymmetry. However, these technological tools are expensive and require the patient's hospitalization. This research work suggests overcoming the above limitations by applying artificial neural networks. Keywords: kinetics, kinematics, cyclograms, neural networks, transtibial amputation
Procedia PDF Downloads 443