Search results for: missing data estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26530

23740 AI-Based Technologies in International Arbitration: An Exploratory Study on the Practicability of Applying AI Tools in International Arbitration

Authors: Annabelle Onyefulu-Kingston

Abstract:

One of the major purposes of AI today is to evaluate and analyze millions of micro and macro data points in order to determine what is relevant in a particular case and present it in an adequate manner. Micro data, as it relates to AI in international arbitration, comprises the millions of key issues specifically raised by one or both parties, their counsels, arbitrators, or arbitral tribunals in arbitral proceedings; these include the qualifications of expert witnesses and the admissibility of evidence, amongst others. Macro data, on the other hand, refers to data derived from the resolution of the dispute and, consequently, the final and binding award; notable examples include the rationale of the award and the specific and general damages awarded. This paper aims to critically evaluate and analyze the possibility of technological inclusion in international arbitration. The research employs a qualitative method, evaluating existing literature on the consequences of applying AI to both micro and macro data in international arbitration and on how this can assist parties, counsels, and arbitrators.

Keywords: AI-based technologies, algorithms, arbitrators, international arbitration

Procedia PDF Downloads 101
23739 A Virtual Grid Based Energy Efficient Data Gathering Scheme for Heterogeneous Sensor Networks

Authors: Siddhartha Chauhan, Nitin Kumar Kotania

Abstract:

Traditional Wireless Sensor Networks (WSNs) generally use static sinks to collect data from the sensor nodes via multi-hop forwarding. The network therefore suffers from problems such as long message relay times and bottlenecks near the sink, which reduce its performance. Many approaches have been proposed to address these problems by using a mobile sink to collect data from the sensor nodes, but they still suffer from buffer overflow due to the limited memory of sensor nodes. This paper proposes an energy-efficient data gathering scheme that overcomes the buffer overflow problem. The proposed scheme creates a virtual grid structure of heterogeneous nodes and is designed for sensor nodes with variable sensing rates. Every node computes its buffer overflow time, and cluster heads are elected on that basis. The proposed scheme uses a controlled traversing approach to transmit data to the sink. The effectiveness of the proposed scheme is verified by simulation.
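The core idea of the election step can be pictured with a minimal sketch. The buffer model, field names, and the choice of electing the soonest-to-overflow nodes are illustrative assumptions; the paper does not publish its code.

```python
# Hypothetical sketch: each node derives its buffer overflow time from its
# buffer capacity and sensing (data-generation) rate; nodes that would
# overflow soonest are elected cluster heads, in the spirit of the scheme.

def overflow_time(buffer_bytes, sensing_rate_bps):
    """Time (s) until a node's buffer fills at its current sensing rate."""
    return buffer_bytes / sensing_rate_bps

def elect_cluster_heads(nodes, n_heads):
    """Pick the n_heads nodes with the earliest overflow times."""
    ranked = sorted(nodes, key=lambda n: overflow_time(n["buffer"], n["rate"]))
    return [n["id"] for n in ranked[:n_heads]]

nodes = [
    {"id": "A", "buffer": 4096, "rate": 16.0},   # overflows in 256 s
    {"id": "B", "buffer": 4096, "rate": 64.0},   # overflows in 64 s
    {"id": "C", "buffer": 2048, "rate": 64.0},   # overflows in 32 s
]
print(elect_cluster_heads(nodes, 2))  # → ['C', 'B']
```

With variable sensing rates, a node's priority changes over time, so in practice the election would be re-run each gathering round.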

Keywords: buffer overflow problem, mobile sink, virtual grid, wireless sensor networks

Procedia PDF Downloads 393
23738 Challenge of Baseline Hydrology Estimation at Large-Scale Watersheds

Authors: Can Liu, Graham Markowitz, John Balay, Ben Pratt

Abstract:

Baseline, or natural, hydrology is commonly employed for hydrologic modeling and for quantifying hydrologic alteration due to manmade activities. It can inform planning and policy efforts by state and federal water resource agencies to restore natural streamflow regimes. A common challenge faced by hydrologists is how to replicate unaltered streamflow conditions, particularly in large watersheds prone to development and regulation. Three different methods were employed to estimate baseline streamflow conditions for six major subbasins of the Susquehanna River Basin: 1) incorporating consumptive water use and reservoir operations back into regulated gaged records; 2) using a map correlation method and flow duration (exceedance probability) regression equations; 3) extending the pre-regulation streamflow records based on the relationship between concurrent streamflows at unregulated and regulated gage locations. Parallel analyses were performed among the three methods, and the limitations associated with each are presented. Results from these analyses indicate that generating baseline streamflow records for large-scale watersheds remains challenging, even with long-term continuous stream gage records available.
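Method 1 amounts to a mass balance on the regulated record. The simple daily balance and variable names below are assumptions for illustration, not the authors' procedure.

```python
# Illustrative mass-balance sketch of method 1: consumptive use and reservoir
# storage changes are added back into the regulated gaged record to recover a
# baseline (naturalized) flow.

def naturalized_flow(gaged_cfs, consumptive_use_cfs,
                     storage_start_cf, storage_end_cf, dt_s=86400.0):
    """Baseline flow = gaged flow + consumptive withdrawals
    + rate of change of reservoir storage (a storage gain is withheld water)."""
    delta_storage_cfs = (storage_end_cf - storage_start_cf) / dt_s
    return gaged_cfs + consumptive_use_cfs + delta_storage_cfs

# A day when 50 cfs was consumed and reservoirs stored 8.64e6 cf (100 cfs):
print(naturalized_flow(1200.0, 50.0, 0.0, 8.64e6))  # → 1350.0
```

A drawdown day enters the balance with a negative storage term, reducing the naturalized estimate below the gaged flow.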

Keywords: baseline hydrology, streamflow gage, subbasin, regression

Procedia PDF Downloads 325
23737 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation

Authors: Yang Cao

Abstract:

Analyzing the oral production of Chinese-speaking learners of English as a second language (L2), we find highly variable use of verb inflections. Why does it seem so hard for them to use correct past morphology consistently in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this rather non-target-like performance to the absence of a [±past] feature in their L1 Chinese, arguing that for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) holds that all features remain acquirable for late L2 learners, but that mapping difficulties from features to forms make it hard for them to realize consistent past morphology on the surface. However, most studies are limited to verb morphology in finite clauses, and few have attempted to examine these learners' performance in non-finite clauses. Additionally, it has been suggested that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading (SPR) task, the current study analyzes the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to the misuse of tense morphology in both finite and non-finite clauses and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian because of its systematic morphology in both present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has selected both the [±finite] and [±past] features, is also involved. By comparing and contrasting the performance of the three language groups, the study further examines and discusses the two theories, FFFH and MSIH.
The preliminary hypotheses are: a) Russian native speakers are expected to spend longer reading verb forms that violate the grammar; b) Chinese participants are expected to be sensitive, at least, to the misuse of inflected verbs in non-finite clauses, although no sensitivity to the misuse of infinitives in finite clauses may be found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners can tell the finite/non-finite distinction; and c) having selected both [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.

Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, second language acquisition, self-paced reading task, verb inflections

Procedia PDF Downloads 111
23736 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to accessing finance, such as physical distance, minimum balance requirements, and low income flows, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing country-level macro data from African countries and using fixed effects, random effects, and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa with a focus on the roles of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa.
From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
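The fixed-effects step can be sketched with the standard within transformation: demean each variable by country, then run OLS on the demeaned data. The synthetic panel below (variable names included) is purely illustrative and not the study's dataset or code.

```python
import numpy as np

# Minimal within (fixed-effects) estimator sketch for a growth regression on
# synthetic panel data with known coefficients and country effects.
rng = np.random.default_rng(0)
n_countries, n_years = 20, 10
country = np.repeat(np.arange(n_countries), n_years)
digital = rng.normal(size=n_countries * n_years)   # digitalization proxy
findev = rng.normal(size=n_countries * n_years)    # financial development proxy
alpha = rng.normal(size=n_countries)[country]      # country fixed effects
growth = 0.5 * digital + 0.3 * findev + alpha + 0.1 * rng.normal(size=country.size)

def within_demean(x, groups):
    """Subtract each group's mean (the within transformation)."""
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

X = np.column_stack([within_demean(digital, country),
                     within_demean(findev, country)])
y = within_demean(growth, country)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # close to the true coefficients [0.5, 0.3]
```

Demeaning sweeps out the country effects, so the slope estimates are unbiased even though the effects are correlated with nothing here; Hausman-Taylor extends this idea when some regressors are correlated with the effects.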

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 106
23735 Information Communication Technology Based Road Traffic Accidents’ Identification, and Related Smart Solution Utilizing Big Data

Authors: Ghulam Haider Haidaree, Nsenda Lukumwena

Abstract:

Today the world of research enjoys abundant data, available in virtually any field: technology, science, business, politics, etc. This is commonly referred to as big data, and it offers a great deal of precision and accuracy in support of an in-depth look at any decision-making process. When well used, big data affords its users the opportunity to produce substantially well-supported results. This paper leans extensively on big data to investigate possible smart solutions to urban mobility and related issues, namely road traffic accidents and their casualties and fatalities, based on multiple factors including age, gender, and the locations at which accidents occur. Multiple technologies were used in combination to produce an Information Communication Technology (ICT) based solution with embedded technology, principally Geographic Information Systems (GIS), the Orange data mining software, and Bayesian statistics. The study uses the 2016 Leeds accident dataset to illustrate the thinking process and extracts from it a model that can be tested, evaluated, and replicated. The authors optimistically believe that the proposed model will significantly and smartly help to flatten the curve of road traffic accidents in fast-growing population centers, where motor-based mobility is increasing considerably.

Keywords: accident factors, geographic information system, information communication technology, mobility

Procedia PDF Downloads 209
23734 Analysis of ECGs Survey Data by Applying Clustering Algorithm

Authors: Irum Matloob, Shoab Ahmad Khan, Fahim Arif

Abstract:

The Indo-Pak region has been afflicted by heart disease for many decades. Many surveys have shown that the percentage of cardiac patients in Pakistan is increasing day by day, and this issue needs special attention. A framework is proposed for performing a detailed analysis of ECG survey data collected to measure the prevalence of heart disease in Pakistan. The ECG survey data are evaluated and filtered using automated Minnesota codes, and only those ECGs fulfilling the standardized conditions in the Minnesota codes are used for further analysis. Feature selection is then performed by applying a proposed algorithm based on a discernibility matrix to select relevant features from the database. Clustering is performed to expose natural clusters in the ECG survey data by applying spectral clustering with the fuzzy c-means algorithm. The hidden patterns and interesting relationships exposed by this analysis are useful for further detailed analysis and for many other purposes.

Keywords: arrhythmias, centroids, ECG, clustering, discernibility matrix

Procedia PDF Downloads 352
23733 The Impact of Motivation on Employee Performance in South Korea

Authors: Atabong Awung Lekeazem

Abstract:

The purpose of this paper is to identify the impact or role of incentives on employees' performance, with particular emphasis on Korean workers. The process begins by defining and explaining the different types of motivation and bringing out the difference between the two major types. The second phase involves gathering data from a sample population and then analyzing it. The analysis reveals the broadly similar value that Koreans attach to motivation, with a slightly different view coming only from top management personnel. The last phase presents the data and draws conclusions about how managers and potential managers can bring out the best in their employees.

Keywords: motivation, employee’s performance, Korean workers, business information systems

Procedia PDF Downloads 416
23732 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issues of imbalance and overlap in the class distribution are important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (the major class) heavily exceeds the number of observations of the other class (the minor class). An overlapped dataset is one in which many observations are shared between the two classes. Imbalanced and overlapped data are frequently found in real applications including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is challenging because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts, non-overlapping, lightly overlapping, and severely overlapping, and applying a classification algorithm in each part. These three parts are determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
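The Hausdorff distance the split relies on has a standard definition that is easy to compute directly. This is a generic sketch with toy point sets, not the authors' implementation.

```python
import numpy as np

# Sketch of the directed and symmetric Hausdorff distance between two
# classes' observation sets (the quantity used to gauge class overlap).

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

major = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
minor = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(major, minor))  # → 3.0
```

A small symmetric Hausdorff distance relative to the class spreads signals heavy overlap, which is where the severe-overlap treatment would apply.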

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 308
23731 Vegetation Index-Deduced Crop Coefficient of Wheat (Triticum aestivum) Using Remote Sensing: Case Study on Four Basins of Golestan Province, Iran

Authors: Hoda Zolfagharnejad, Behnam Kamkar, Omid Abdi

Abstract:

The crop coefficient (Kc) is an important factor in the estimation of evapotranspiration and is also used to determine the irrigation schedule. This study determined the monthly Kc of winter wheat (Triticum aestivum L.) using five vegetation indices (VIs): the Normalized Difference Vegetation Index (NDVI), Difference Vegetation Index (DVI), Soil Adjusted Vegetation Index (SAVI), Infrared Percentage Vegetation Index (IPVI), and Ratio Vegetation Index (RVI), over four basins in Golestan province, Iran. Fourteen Landsat-8 images covering the crop growth stages were used to estimate the monthly Kc of wheat. The VIs were calculated from the red and near-infrared bands of the Landsat-8 images using Geographic Information System (GIS) software. The best VI was chosen, on the basis of R² and root mean square error (RMSE), after establishing regression relationships between these VIs and both the FAO Kc and a Kc modified for the study area by previous research. The results showed that the locally modified SAVI, with R² = 0.767 and RMSE = 0.174, was the best index for producing monthly wheat Kc maps.
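Two of the indices have simple per-pixel band-math forms, shown below with the usual SAVI soil adjustment factor L = 0.5; the reflectance values are illustrative, not taken from the study.

```python
# Per-pixel band math for two of the vegetation indices, from the red and
# near-infrared (NIR) reflectances.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    return (nir - red) * (1.0 + L) / (nir + red + L)

# A vigorous wheat pixel (high NIR, low red reflectance):
print(round(ndvi(0.45, 0.05), 3))   # → 0.8
print(round(savi(0.45, 0.05), 3))   # → 0.6
```

A monthly Kc map then follows by applying the fitted linear regression, roughly Kc = a * VI + b, to each pixel's index value.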

Keywords: crop coefficient, remote sensing, vegetation indices, wheat

Procedia PDF Downloads 414
23730 Mapping of Geological Structures Using Aerial Photography

Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash

Abstract:

Rapid growth in data acquisition technologies, such as drones, has led to advances and interest in collecting high-resolution images of geological fields. While these platforms are advantageous in capturing a high volume of data in short flights, a number of challenges must be overcome for efficient analysis of the data, especially during acquisition, image interpretation, and processing. We introduce a method for effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images were not used because of limitations in resolution, currency (an image may be a year old), availability, difficulty in capturing the exact scene, and night-time imaging. The method combines advanced automated image interpretation with human interaction to model structures. Geological structures are first detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model (DEM); dip and dip direction can be calculated from this information. The structural map is generated by a specified methodology: choosing the appropriate camera and camera mounting system, designing the UAV for the area and application, and addressing airborne-system challenges such as errors in image orientation, payload constraints, mosaicking, georeferencing, and registration of the different images to the DEM. The paper shows the potential of our method for accurate and efficient modeling of geological structures, captured particularly from remote, inaccessible, and hazardous sites.
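The dip calculation mentioned above can be sketched by fitting a plane to DEM samples of a bedding surface and reading dip and dip direction off the fitted gradient. The synthetic points below stand in for real DEM samples; this is our illustration, not the paper's algorithm.

```python
import math
import numpy as np

# Fit a plane z = a*x + b*y + c to surface points taken from a DEM, then
# derive dip (inclination of the plane) and dip direction (down-dip azimuth).

def dip_and_direction(pts):
    """pts: (n, 3) array of (x_east, y_north, z_up) coordinates."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    dip = math.degrees(math.atan(math.hypot(a, b)))
    # Down-dip (steepest descent) azimuth, clockwise from north:
    azimuth = math.degrees(math.atan2(-a, -b)) % 360.0
    return dip, azimuth

# A synthetic plane dipping 30 degrees toward the east (azimuth 090):
g = math.tan(math.radians(30.0))
pts = np.array([[x, y, -g * x] for x in range(5) for y in range(5)], float)
dip, az = dip_and_direction(pts)
print(round(dip, 1), round(az, 1))  # → 30.0 90.0
```

Least squares also tolerates DEM noise: with scattered, imperfect samples, the fitted gradient still gives a robust average orientation of the surface.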

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures

Procedia PDF Downloads 688
23729 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, enabling historical disaster information to be retrieved using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies of reports. Such unstructured data are often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, the Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data are then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information from unstructured data using OCR is an important element of smart disaster management. In this work, Korean character recognition was improved to over a 90% recognition rate using an upgraded OCR; because the recognition rate depends on the fonts, sizes, and special symbols of the characters, we improved it through a machine learning algorithm.
The converted structured data are managed in a standardized disaster information form connected with the disaster code system, which ensures that the structured information can be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is applicability to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
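The post-OCR structuring step can be pictured as parsing recognized text into records keyed by a disaster code. The code scheme, field names, and line format below are invented for illustration; the paper does not publish its disaster code system.

```python
import re

# Hypothetical sketch: a line of OCR-recognized report text is parsed into a
# structured record keyed by a disaster code, ready for database storage.

PATTERN = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+(?P<type>\w+)\s+"
    r"damage[:=]\s*(?P<damage>\d+)", re.IGNORECASE)

TYPE_CODES = {"flood": "ND-01", "typhoon": "ND-02", "landslide": "ND-03"}

def to_record(ocr_line):
    m = PATTERN.search(ocr_line)
    if m is None:
        return None  # unrecognized lines are routed to manual review
    return {"code": TYPE_CODES.get(m["type"].lower(), "ND-99"),
            "date": m["date"],
            "damage_krw": int(m["damage"])}

print(to_record("2016-07-05 Flood damage: 120000"))
# → {'code': 'ND-01', 'date': '2016-07-05', 'damage_krw': 120000}
```

In practice OCR noise means the parse will sometimes fail, which is why the sketch returns None rather than guessing; such lines would fall back to human review.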

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 131
23728 Inflammatory Cytokine (Interleukin-8): A Diagnostic Marker in Leukemia

Authors: Sandeep Pandey, Nimra Habib, Ranjana Singh, Abbas Ali Mahdi

Abstract:

Leukemia is a malignancy of the blood that mainly affects children and young adults; advances in early diagnosis have the potential to improve disease outcomes. A wide range of diseases, including leukemia, show inflammatory signals in their pathogenesis. In a pilot study conducted in our laboratory, 52 people were screened, of whom 26 had leukemia and 26 were free from any kind of malignancy. We measured the inflammatory cytokine interleukin-8 (IL-8) and found it significantly raised in all leukemia patients relative to the healthy volunteers who participated in the study. Flow cytometry was performed to confirm leukemia, and further genomic and proteomic analyses of the samples revealed that IL-8 levels showed a positive correlation in patients with leukemia. The results showed constitutive secretion of interleukin-8 by leukemia cells. Our findings therefore suggest that IL-8 has a role in the pathogenesis of leukemia, and quantification of IL-8 levels might be useful and feasible in the clinical setting for the prediction of drug responses, where it may represent a putative target for innovative diagnostics and effective therapeutic approaches. However, further research in this area is needed, including a greater number of patients with all the different forms of leukemia; estimating their IL-8 levels may hold the key to additional predictive value on the recurrence of leukemia and its prognosis.

Keywords: T-ALL, IL-8, leukemia pathogenesis, cancer therapeutics

Procedia PDF Downloads 73
23727 Predicting Seoul Bus Ridership Using Artificial Neural Network Algorithm with Smartcard Data

Authors: Hosuk Shin, Young-Hyun Seo, Eunhak Lee, Seung-Young Kho

Abstract:

Currently, in Seoul, users can avoid riding crowded buses thanks to the Bus Information System (BIS), which provides three levels of on-board ridership information (spacious, normal, and crowded). However, because the system is real-time only, it can provide incomplete information to the user. For example, a bus approaching a station may be shown as crowded on the BIS, but if many people get off at the stop where the user is waiting, the information should instead show normal or spacious. To fix this problem, this study predicts the bus ridership level using smart card data, in order to provide more accurate information about the passenger ridership level on the bus. An Artificial Neural Network (ANN) is an interconnected group of nodes inspired by the human brain. Forecasting has been one of the major applications of ANNs due to the data-driven, self-adaptive nature of the algorithm. According to the results, the ANN algorithm was stable and robust with a relatively small error ratio, so the results were rational and reasonable.

Keywords: smartcard data, ANN, bus, ridership

Procedia PDF Downloads 170
23726 Combination of Artificial Neural Network Model and Geographic Information System for Prediction Water Quality

Authors: Sirilak Areerachakul

Abstract:

Water quality problems have initiated serious management efforts in many countries. Artificial Neural Network (ANN) models have been developed as forecasting tools for predicting water quality trends from historical data. This study endeavors to classify water quality automatically. The water quality classes are evaluated using six factor indices: pH value (pH), Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Nitrate Nitrogen (NO3N), Ammonia Nitrogen (NH3N), and Total Coliform (T-Coliform). The methodology involves applying data mining techniques using multilayer perceptron (MLP) neural network models. The data cover 11 sites on the Saen Saep canal in Bangkok, Thailand, and were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, for 2007-2011. The multilayer perceptron neural network achieved a high classification accuracy of 94.23% for the water quality of the Saen Saep canal. Combining this encouraging result with GIS data could subsequently improve the classification accuracy significantly.
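The abstract does not give the network configuration, so the following is a generic one-hidden-layer MLP sketch on toy two-feature data standing in for the six water-quality indices; architecture, learning rate, and data are all our assumptions.

```python
import numpy as np

# Minimal multilayer-perceptron sketch: one hidden sigmoid layer, sigmoid
# output, batch gradient descent on the cross-entropy loss.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy class labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    H = sigmoid(X @ W1 + b1)          # hidden-layer activations
    P = sigmoid(H @ W2 + b2)          # predicted class probability
    G2 = (P - y) / len(X)             # output error (cross-entropy gradient)
    G1 = (G2 @ W2.T) * H * (1 - H)    # back-propagated hidden-layer error
    W2 -= H.T @ G2; b2 -= G2.sum(axis=0)
    W1 -= X.T @ G1; b1 -= G1.sum(axis=0)

accuracy = ((P > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")  # high on this separable toy set
```

The real model would simply widen the input layer to the six indices and the output to the number of water quality classes (softmax instead of a single sigmoid).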

Keywords: artificial neural network, geographic information system, water quality, computer science

Procedia PDF Downloads 345
23725 Electronic Physical Activity Record (EPAR): Key for Data Driven Physical Activity Healthcare Services

Authors: Rishi Kanth Saripalle

Abstract:

Medical experts highly recommend including physical activity in everyone's daily routine, irrespective of gender or age, as it helps to improve various medical conditions or curb potential ones. At the same time, experts are diligently trying to provide various healthcare services (interventions, plans, exercise routines, etc.) to promote healthy living and increase physical activity within ever more hectic schedules. With the introduction of wearables, individuals are able to keep track of, analyze, and visualize their daily physical activities. However, there seems to be no commonly agreed standard, in either structure or semantics, for representing, gathering, aggregating, and analyzing an individual's physical activity data from disparate sources (exercise plans, multiple wearables, etc.). This makes it highly impractical to develop data-driven physical activity applications and healthcare programs. Further, the inability to integrate physical activity data into an individual's Electronic Health Record (EHR) to provide a holistic image of that individual's health still eludes the experts. This article identifies three primary reasons for this issue. First, there is no agreed standard, either structural or semantic, for representing and sharing physical activity data across disparate systems. Second, various organizations (e.g., LA Fitness, Gold's Gym, etc.) and research-backed interventions and programs still rely primarily on paper or unstructured formats (such as text or notes) to keep track of the data generated from physical activities. Finally, most wearable devices operate in silos. The article identifies the underlying problem, explores the idea of reusing existing standards, and identifies the essential modules required to move forward.
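One way to picture the missing common representation is a structured record that disparate sources (wearables, gym plans, paper logs) could all map into. The record name and fields below are illustrative assumptions, not a published standard.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical common record for one bout of physical activity, serializable
# for exchange between apps or for EHR ingestion.

@dataclass
class PhysicalActivityRecord:
    subject_id: str
    source: str                # e.g. a wearable vendor or an intervention program
    activity: str              # e.g. "walking", "resistance-training"
    start_iso: str             # ISO-8601 timestamp for interoperability
    duration_min: float
    energy_kcal: Optional[float] = None   # optional; not every source reports it

rec = PhysicalActivityRecord("p-017", "wearable-x", "walking",
                             "2024-05-01T07:30:00Z", 42.0, 180.0)
payload = json.dumps(asdict(rec))        # structured, source-independent form
print(json.loads(payload)["activity"])   # → walking
```

The point of such a schema is semantic agreement: once every source emits the same fields with the same meanings, aggregation and EHR integration become straightforward joins rather than bespoke parsers.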

Keywords: electronic physical activity record, physical activity in EHR EIM, tracking physical activity data, physical activity data standards

Procedia PDF Downloads 285
23724 Discussion on Dispersion Curves of Non-penetrable Soils from in-Situ Seismic Dilatometer Measurements

Authors: Angelo Aloisio Dag, Pasquale Pasca, Massimo Fragiacomo, Ferdinando Totani, Gianfranco Totani

Abstract:

The estimation of shear wave velocity (Vs) is essential in seismic engineering to characterize the dynamic response of soils, and there are various direct methods to estimate it. The authors report the results of a site characterization in Macerata, where they measured Vs using the seismic dilatometer in a 100 m deep borehole. The standard Vs estimate originates from the cross-correlation between the signals acquired by two geophones at increasing depths. This paper focuses on estimating the dependence of Vs on the wavenumber. The dispersion curves reveal an unexpected hyperbolic shape typical of Lamb waves, whose contribution, interestingly, may be notable down to 100 m depth. The amplitude of surface waves decreases rapidly with depth; still, their influence may be essential down to depths considered unusual for standard geotechnical investigations, where their effect is generally neglected. Accordingly, these waves may bias the outcomes of standard Vs estimations, which ignore frequency-dependent phenomena. The paper proposes an enhancement of the accepted procedure for estimating Vs and addresses the importance of Lamb waves in soil characterization.
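The standard cross-correlation estimate mentioned above can be sketched on a synthetic wavelet: the lag of the correlation peak gives the travel time over the geophone spacing, and Vs follows as distance over time. Signal parameters here are illustrative.

```python
import numpy as np

# Sketch of the standard Vs estimate: the delay between two geophone
# recordings a known depth interval apart is read off the cross-correlation
# peak, and Vs is the interval divided by that delay.

def vs_from_geophones(upper, lower, dz_m, dt_s):
    """upper/lower: signals at two depths; dz_m: spacing; dt_s: sample step."""
    xcorr = np.correlate(lower, upper, mode="full")
    lag = np.argmax(xcorr) - (len(upper) - 1)   # delay in samples
    return dz_m / (lag * dt_s)

# Synthetic wavelet arriving 10 samples later at the deeper geophone:
dt = 1e-3                                 # 1 ms sampling
wavelet = np.exp(-0.5 * ((np.arange(200) - 60) / 5.0) ** 2)
upper = wavelet
lower = np.roll(wavelet, 10)              # 10 ms travel time over 5 m
v = vs_from_geophones(upper, lower, dz_m=5.0, dt_s=dt)
print(round(v, 1))  # → 500.0
```

This time-domain estimate yields a single Vs per interval; the paper's point is that when Lamb-type dispersion is present, the delay, and hence Vs, becomes frequency dependent, which this simple estimate cannot capture.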

Keywords: dispersion curve, seismic dilatometer, shear wave, soil mechanics

Procedia PDF Downloads 177
23723 Developing Pavement Structural Deterioration Curves

Authors: Gregory Kelly, Gary Chai, Sittampalam Manoharan, Deborah Delaney

Abstract:

A Structural Number (SN) can be calculated for a road pavement from the properties and thicknesses of the surface, base course, sub-base, and subgrade. Historically, the cost of collecting structural data has been very high. Data were initially collected using Benkelman beams and are now collected by Falling Weight Deflectometer (FWD). The structural strength of pavements weakens over time due to environmental and traffic loading factors, but for lack of data, no structural deterioration curve for pavements has been implemented in a Pavement Management System (PMS). The International Roughness Index (IRI), a measure of the road's longitudinal profile, has been used as a proxy for a pavement's structural integrity. This paper offers two conceptual methods to develop Pavement Structural Deterioration Curves (PSDC). In the first, structural data are grouped in sets by design Equivalent Standard Axles (ESA), and an initial SN (ISN), intermediate SNs (SNIs), and a terminal SN (TSN) are used to develop the curves. Using FWD data, the ISN is the SN after the pavement is rehabilitated (the financial accounting 'modern equivalent'); intermediate SNIs are SNs other than the ISN and TSN; and the TSN is defined as the SN of the pavement when it was approved for pavement rehabilitation. The second method uses Traffic Speed Deflectometer (TSD) data. The road network, already divided into road blocks, is grouped by traffic loading. For each traffic loading group, road blocks that have had a recent pavement rehabilitation are used to calculate the ISN, and those planned for pavement rehabilitation are used to calculate the TSN. The remaining SNs are used to complete the age-based or, where available, historical traffic-loading-based SNIs.
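The layer-based SN calculation mentioned in the opening sentence is conventionally a weighted sum over the pavement layers (the AASHTO-style form); the layer coefficients, drainage factors, and thicknesses below are illustrative values only, not from the paper.

```python
# AASHTO-style layer form of the Structural Number:
#   SN = sum over layers of (layer coefficient * thickness * drainage factor)
# with thickness in inches.

def structural_number(layers):
    """layers: iterable of (layer_coeff, thickness_in, drainage_m) tuples."""
    return sum(a * d * m for a, d, m in layers)

layers = [
    (0.44, 4.0, 1.0),   # asphalt surface
    (0.14, 8.0, 1.0),   # granular base
    (0.11, 6.0, 1.0),   # granular sub-base
]
print(round(structural_number(layers), 2))  # → 3.54
```

A PSDC would then plot such SN values (ISN, SNIs, TSN) against age or cumulative ESA for each traffic-loading group.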

Keywords: conceptual, pavement structural number, pavement structural deterioration curve, pavement management system

Procedia PDF Downloads 545
23722 Effects of Porosity Logs on Pore Connectivity and Volumetric Estimation

Authors: Segun S. Bodunde

Abstract:

In Bona Field, Niger Delta, two reservoirs across three wells were analyzed. The research aimed at determining the statistical dependence of permeability and oil volume in place on porosity logs. Of the three popular porosity logs, two were used: the sonic and density logs. The objectives of the research were to identify the porosity log that varies more with location and direction, to visualize the depth trend of both logs, and to determine the influence of these logs on pore connectivity determination and volumetric analysis. The focus was on density and sonic logs. It was observed that the sonic-derived porosities were higher than the density-derived porosities (in well two, across the two reservoir sands, sonic porosity averaged 30.8%, while density-derived porosity averaged 23.65%; the same trend was observed in the other wells). The sonic logs were further observed to have a lower coefficient of variation than the density logs (in sand A, well 2, sonic-derived porosity had a coefficient of variation of 12.15%, compared to 22.52% for the density logs), indicating a lower tendency to vary with location and direction. The bulk density was observed to increase with depth, while the transit time reduced with depth. It was also observed that for an 8.87% decrease in porosity, the pore connectivity decreased by about 38%.
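The comparison statistic used above is the coefficient of variation, CV = (standard deviation / mean) x 100%. A minimal sketch, with invented porosity samples rather than the Bona Field data:

```python
# Minimal sketch of the statistic used above: the coefficient of variation
# quantifies how much a porosity log varies with location and direction.
# The porosity samples below are invented for illustration.
import numpy as np

def coefficient_of_variation(values):
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0   # percent

sonic_porosity   = [0.31, 0.30, 0.33, 0.29, 0.31]   # fraction, assumed samples
density_porosity = [0.24, 0.19, 0.28, 0.21, 0.26]

# A lower CV indicates less spatial variability, as reported for the sonic log
print(coefficient_of_variation(sonic_porosity) <
      coefficient_of_variation(density_porosity))  # True
```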

Keywords: pore connectivity, coefficient of variation, density derived porosity, sonic derived porosity

Procedia PDF Downloads 194
23721 Nilsson Model Performance in Estimating Bed Load Sediment, Case Study: Tale Zang Station

Authors: Nader Parsazadeh

Abstract:

The variety of bed sediment load relationships, insufficient information and data, and the influence of river conditions make the selection of an optimum relationship for a given river extremely difficult. Hence, in order to select the best formulae, the bed load equations should be evaluated. The affecting factors need to be scrutinized, and the equations should be verified. Also, re-evaluation may be needed. In this research, the sediment bed load of the Dez Dam at the Tal-e Zang Station has been studied. After reviewing the available references, the most common formulae, including Meyer-Peter and Müller, were selected, and MS Excel was used to compute and evaluate the data. Then, 52 series of data already measured at the station were re-evaluated, and the sediment bed load was determined. 1. The calculated bed load obtained by the different equations differed greatly from the measured data. 2. The percentage of results with a discrepancy ratio r between 0.5 and 2.00 was 0% for all equations except the Nilsson and Shields equations, for which it was 61.5% and 59.6%, respectively. 3. After reviewing the results and discarding probably erroneous measurements (by human or machine), one may use the Nilsson equation, due to its r value higher than 1, as an effective equation for estimating the bed load at the Tal-e Zang Station and for predicting activities that depend upon a bed sediment load estimate. Also, since only a few studies have been conducted so far, these results may be of assistance to operators and consulting companies.
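The scoring criterion cited above (the fraction of points whose computed-to-measured ratio r falls between 0.5 and 2.00) can be sketched as follows. The bed load values are invented for illustration; they are not the Tale Zang measurements.

```python
# Hedged sketch of the evaluation criterion: for each measurement the
# discrepancy ratio r = computed / measured bed load is formed, and an
# equation is scored by the percentage of points with 0.5 <= r <= 2.0.
# The values below are invented, not the Tale Zang data.
def score_equation(computed, measured, lo=0.5, hi=2.0):
    ratios = [c / m for c, m in zip(computed, measured)]
    inside = sum(1 for r in ratios if lo <= r <= hi)
    return 100.0 * inside / len(ratios)

measured = [12.0, 30.0, 45.0, 80.0, 150.0]      # t/day, assumed observations
computed = [15.0, 20.0, 100.0, 400.0, 140.0]    # one equation's predictions

print(score_equation(computed, measured))  # 60.0
```

Applied to each candidate formula over the 52 data series, this yields the percentages reported in the abstract.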

Keywords: bed load, empirical relationship, sediment, Tale Zang Station

Procedia PDF Downloads 363
23720 Hierarchical Filtering Method of Threat Alerts Based on Correlation Analysis

Authors: Xudong He, Jian Wang, Jiqiang Liu, Lei Han, Yang Yu, Shaohua Lv

Abstract:

Nowadays, internet threats are enormous and increasing; however, the classification of the huge number of alert messages generated in this environment is relatively monotonous. This affects the accuracy of network situation assessment and also makes it inconvenient for security managers to deal with emergencies. In order to deal with potential network threats effectively and to provide better data for network situation awareness, it is essential to build a hierarchical filtering method to prevent the threats. This paper establishes a model for data monitoring, which filters the original data systematically to obtain the grade of each threat and stores the results for reuse. Firstly, it filters the vulnerable resources, open ports of host devices, and services. Then, entropy theory is used to calculate the performance changes of the host devices at the time the threat occurs, and the data are filtered again. Finally, the changes in the performance values at the time of the threat are sorted. Alerts and performance data collected in a real network environment are used for evaluation and analysis. The comparative experimental analysis shows that the proposed method can filter threat alerts effectively.
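The entropy step in the second filtering stage can be sketched with Shannon entropy over a host's monitored performance metrics. This is an illustrative stand-in, not the authors' implementation, and the metric values are invented.

```python
# Illustrative sketch (not the authors' implementation): using Shannon
# entropy to quantify how concentrated a host's performance load is at the
# time a threat occurs. A sharp entropy drop across monitored metrics can
# flag the host for the next filtering stage. The data are invented.
import math

def shannon_entropy(values):
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in probs)

# CPU, memory, disk and network load shares before and during an alert
baseline = [25, 25, 25, 25]        # evenly spread -> maximum entropy
under_attack = [85, 5, 5, 5]       # one metric dominates -> low entropy

print(shannon_entropy(baseline))   # 2.0 (bits)
print(shannon_entropy(under_attack) < shannon_entropy(baseline))  # True
```

Hosts whose entropy change exceeds a threshold at alert time would pass to the final sorting stage.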

Keywords: correlation analysis, hierarchical filtering, multisource data, network security

Procedia PDF Downloads 202
23719 Feedback Preference and Practice of English Majors in Pronunciation Instruction

Authors: Claerchille Jhulia Robin

Abstract:

This paper discusses the perspective of ESL learners towards pronunciation instruction. It sought to determine how these learners view the type of feedback their speech teacher gives and its impact on their own classroom practice of providing feedback. This study utilized a quantitative-qualitative approach to the problem. The respondents were Education students majoring in English. A survey questionnaire and an interview guide were used for data gathering. The survey data were tabulated using frequency counts, and the interview data were transcribed and analyzed. Results showed that ESL learners favor immediate corrective feedback and do not find any issue in being corrected in front of their peers. They also practice the same corrective technique in their own classrooms.

Keywords: ESL, feedback, learner perspective, pronunciation instruction

Procedia PDF Downloads 236
23718 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities

Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climate change. The United Nations has forecast that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings currently account for 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net-zero carbon emissions in communities requires action in two directions: I) reduction of emissions; and II) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net-zero carbon communities, spanning both emission reductions and removal of on-going emissions. NBS for the decarbonization of buildings can be achieved by using green roofs and green walls – increasing vertical and horizontal vegetation on building envelopes – and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store a substantial amount of carbon within their structure over the whole building service life. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction, de-carbonization involves designing new buildings with low-carbon materials as well as with the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges with respect to the de-carbonization of buildings using NBS for both building retrofits and new construction.
This review documents the effectiveness of NBS in de-carbonizing Canadian buildings, identifies the missing links for implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change, such as urban heat islands. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.

Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings

Procedia PDF Downloads 95
23717 Torsional Rigidities of Reinforced Concrete Beams Subjected to Elastic Lateral Torsional Buckling

Authors: Ilker Kalkan, Saruhan Kartal

Abstract:

Reinforced concrete (RC) beams rarely undergo lateral-torsional buckling (LTB), since these beams possess large lateral bending and torsional rigidities owing to their stocky cross-sections, unlike steel beams. However, the problem of LTB has become more and more pronounced in recent decades as the span lengths of concrete beams increase and cross-sections become more slender with the use of pre-stressed concrete. The buckling moment of a beam mainly depends on its lateral bending rigidity and torsional rigidity. The nonhomogeneous and elastic-inelastic nature of RC complicates the estimation of the buckling moments of concrete beams. Furthermore, the lateral bending and torsional rigidities of RC beams, and hence the buckling moments, are affected by different forms of concrete cracking, including flexural, torsional, and restrained shrinkage cracking. The present study pertains to the effects of concrete cracking on the torsional rigidities of RC beams prone to elastic LTB. A series of tests on rather slender RC beams indicated that torsional cracking does not initiate until buckling in elastic LTB, while flexural cracking associated with lateral bending takes place even at the initial stages of loading. Hence, the present study clearly indicated that the uncracked torsional rigidity needs to be used for estimating the buckling moments of RC beams liable to elastic LTB.
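The dependence of the buckling moment on the two rigidities can be illustrated with the classical elastic LTB formula for a simply supported beam under uniform moment, M_cr = (pi / L) sqrt(E Iy G J) (warping rigidity neglected). The section properties below are invented placeholders, not values from the tested beams.

```python
# Worked sketch of the dependence noted above: the classical elastic
# lateral-torsional buckling moment grows with both the lateral bending
# rigidity E*Iy and the torsional rigidity G*J. All section properties are
# assumed placeholders, not data from the paper's test series.
import math

def critical_moment(E, Iy, G, J, L):
    """Elastic LTB moment, M_cr = (pi/L) * sqrt(E*Iy * G*J); warping
    rigidity is neglected, as is common for stocky rectangular sections."""
    return (math.pi / L) * math.sqrt(E * Iy * G * J)

E, G = 30e9, 12.5e9          # Pa, assumed concrete moduli
Iy, J = 2.1e-4, 6.0e-4       # m^4, assumed uncracked section constants
L = 12.0                     # m, span

m_uncracked = critical_moment(E, Iy, G, J, L)
# Flexural cracking reduces the lateral bending rigidity E*Iy, lowering M_cr,
# while the study's finding lets G*J keep its uncracked value until buckling:
m_cracked_flexure = critical_moment(E, 0.4 * Iy, G, J, L)
print(m_cracked_flexure < m_uncracked)  # True
```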

Keywords: lateral stability, post-cracking torsional rigidity, uncracked torsional rigidity, critical moment

Procedia PDF Downloads 237
23716 Automatic Tagging and Accuracy in Assamese Text Data

Authors: Chayanika Hazarika Bordoloi

Abstract:

This paper is an attempt to work on a highly inflectional language, Assamese. It is one of the national languages of India, and very little has been achieved for it in terms of computational research. Building a language processing tool for a natural language is not very smooth, as the standard and the language representation change at various levels. This paper presents the inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. Conditional random fields (the CRF tool) were used to automatically tag and train the text data; accuracy improved after linguistic features were fed into the training data. Assamese is a highly inflectional language; hence, standardizing its morphology is challenging. Inflectional suffixes are used as a feature of the text data. In order to analyze the inflections of Assamese word forms, a list of suffixes was prepared, comprising all possible suffixes that the various categories can take. Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and un-inflected classes (adverb and particle). The corpus used for this morphological analysis contains a huge number of tokens. It is a mixed corpus, and it has given satisfactory accuracy. The accuracy rate of the tagger gradually improved with the modified training data.
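The feature design described above, attaching entries from a prepared suffix list to each token before CRF training, can be sketched as follows. The suffix list and the sentence are romanized stand-ins invented for illustration; they are not entries from the authors' Assamese resources.

```python
# Hypothetical sketch of the suffix-feature extraction described above:
# inflectional suffixes from a prepared list are attached to each token as
# features before CRF training. Suffixes and tokens are invented, romanized
# stand-ins, not the authors' actual Assamese suffix list.
SUFFIXES = ["ise", "ile", "iba", "bor", "khon"]   # assumed verb/noun endings

def token_features(sentence, i):
    word = sentence[i]
    feats = {
        "word": word,
        "is_first": i == 0,
        "prev_word": sentence[i - 1] if i > 0 else "<BOS>",
    }
    # Linguistic feature: the longest matching inflectional suffix, if any
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf):
            feats["suffix"] = suf
            break
    return feats

sent = ["xi", "kitapkhon", "porhile"]             # romanized toy sentence
print(token_features(sent, 2)["suffix"])          # ile
```

Feature dictionaries of this shape are what a CRF toolkit consumes alongside the gold tags during training.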

Keywords: CRF, morphology, tagging, tagset

Procedia PDF Downloads 196
23715 Simulation of Technological, Energy and GHG Comparison between a Conventional Diesel Bus and E-bus: Feasibility to Promote E-bus Change in High Lands Cities

Authors: Riofrio Jonathan, Fernandez Guillermo

Abstract:

Renewable energy represented around 80% of the energy matrix for power generation in Ecuador during 2020, so the deployment of current public policies is focused on taking advantage of the high presence of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of its main milestones. In fact, it states that new vehicles for urban and interurban use must be e-buses from 2025. As a result, and for a successful implementation of this technological change in the national context, it is important to deploy land surveys focused on technical and geographical areas to maintain the quality of service in both the electricity and transport sectors. Therefore, this research presents a technological and energy comparison between a conventional diesel bus and its equivalent e-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of a model for the energy estimation of both technologies applied to a highland city such as Ambato. The altimetry of the most important bus routes in the city varies from 2557 to 3200 m.a.s.l. at the lowest and highest points, respectively. These operating conditions lend a degree of novelty to this paper. Complementarily, the technical specifications of the diesel buses are defined following the common features of buses registered in Ambato. On the other hand, the specifications for the e-buses come from the most common units introduced in Latin America, because there is not enough evidence from similar cities at the moment.
The achieved results provide useful input for decision-makers, since the electricity demand forecast, energy savings, costs, and greenhouse gas emissions are computed. Indeed, the GHG figures are important because they allow reporting under the transparency framework of the Paris Agreement. Finally, the presented results correspond to stage I of the project “Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030”, supported by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
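The kind of route-level energy model described above can be sketched with basic vehicle longitudinal dynamics: traction energy per segment as rolling resistance plus aerodynamic drag plus gravitational work over the altimetry profile. Every parameter below is an assumed placeholder, not a value from the Ambato study.

```python
# Rough physics sketch of a per-segment bus energy model for a highland
# route: rolling resistance + aerodynamic drag + gravitational work.
# All parameters are assumed placeholders, not values from the study.
import math

G = 9.81  # m/s^2

def segment_energy_kwh(mass_kg, dist_m, dz_m, v_ms,
                       crr=0.008, cda=6.0, rho=0.9):  # rho lower at altitude
    rolling = crr * mass_kg * G * dist_m          # rolling resistance work
    aero = 0.5 * rho * cda * v_ms**2 * dist_m     # aerodynamic drag work
    grade = mass_kg * G * dz_m                    # climb work (negative downhill)
    return max(rolling + aero + grade, 0.0) / 3.6e6   # J -> kWh

# Climbing route: 2557 m -> 3200 m (+643 m) over 15 km at 30 km/h, 18 t bus
uphill = segment_energy_kwh(18000, 15000, 643, 30 / 3.6)
flat = segment_energy_kwh(18000, 15000, 0, 30 / 3.6)
print(uphill > flat)  # True: the climb dominates the energy demand
```

Summing such segments over a route, and applying drivetrain efficiencies for the diesel and electric cases, yields the comparative demand figures the study computes.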

Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility

Procedia PDF Downloads 153
23714 A Human Activity Recognition System Based on Sensory Data Related to Object Usage

Authors: M. Abdullah, Al-Wadud

Abstract:

Sensor-based activity recognition systems usually account for which sensors have been activated to perform an activity. The system then combines the conditional probabilities of those sensors to represent different activities and makes its decision on that basis. However, information about the sensors that are not activated may also be of great help in deciding which activity has been performed. This paper proposes an approach in which sensory data related to both the usage and non-usage of objects are utilized to classify activities. Experimental results show the promising performance of the proposed method.
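The idea can be sketched with a Naïve Bayes classifier whose likelihoods cover both the activation and the non-activation of every object sensor, so silent sensors also contribute evidence. The activities, sensors, and probabilities below are invented for illustration, not the paper's model.

```python
# Minimal sketch of the idea above: Naïve Bayes over object sensors where
# BOTH usage and non-usage carry evidence. Activities, sensors and
# probabilities are invented for illustration.
import math

# P(sensor activated | activity), one entry per (activity, sensor)
usage = {
    "make_tea":  {"kettle": 0.9, "cup": 0.8, "stove": 0.1},
    "cook_meal": {"kettle": 0.2, "cup": 0.3, "stove": 0.9},
}
prior = {"make_tea": 0.5, "cook_meal": 0.5}

def classify(activated):
    scores = {}
    for act, probs in usage.items():
        log_p = math.log(prior[act])
        for sensor, p in probs.items():
            # an activated sensor contributes p; a silent one contributes 1-p
            log_p += math.log(p if sensor in activated else 1.0 - p)
        scores[act] = log_p
    return max(scores, key=scores.get)

# The SILENT stove is part of what tips the decision towards tea-making
print(classify({"kettle", "cup"}))  # make_tea
```

Dropping the `else 1.0 - p` branch recovers the usage-only baseline the paper argues against.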

Keywords: Naïve Bayesian-based classification, activity recognition, sensor data, object-usage model

Procedia PDF Downloads 324
23713 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field

Authors: Nastaran Moosavi, Mohammad Mokhtari

Abstract:

Seismic inversion is a technique which has been in use for years; its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it is a combination of seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion methods on real data from one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock physics properties such as P-impedance, S-impedance, and density, while post-stack seismic inversion can estimate only P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). By comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.

Keywords: density, p-impedance, s-impedance, post-stack seismic inversion, pre-stack seismic inversion

Procedia PDF Downloads 326
23712 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors

Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui

Abstract:

Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and widely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T2 and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detect small faults. In this paper, Exponentially Weighted Moving Average (EWMA) charts based on the T2 and Q statistics, T2-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
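The EWMA idea can be sketched on a synthetic monitoring sequence: the filter accumulates a small persistent shift that a raw threshold on the unsmoothed statistic would largely miss. The smoothing constant, shift size, and control limit below are assumptions, not the paper's parameters.

```python
# Illustrative sketch of the monitoring scheme: an EWMA filter applied to a
# monitoring statistic (a stand-in for the T2 or Q sequence) accumulates
# small persistent shifts. The smoothing constant, shift size and control
# limit are assumed values, not parameters from the paper.
import numpy as np

def ewma(x, lam=0.2):
    """z_t = lam * x_t + (1 - lam) * z_{t-1}"""
    z = np.empty(len(x), dtype=float)
    z[0] = x[0]
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    return z

rng = np.random.default_rng(0)
stat = rng.normal(0.0, 1.0, 200)
stat[100:] += 0.8                  # small sustained mean shift at t = 100

smoothed = ewma(stat)
threshold = 0.6                    # assumed control limit
# After the shift, the smoothed statistic settles near 0.8 with reduced
# variance, so it sits persistently above the limit
print(round(float(smoothed[150:].mean()), 2))
```

In T2-EWMA and Q-EWMA the input sequence is the corresponding PCA monitoring statistic rather than a raw variable.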

Keywords: data-driven method, process control, anomaly detection, dimensionality reduction

Procedia PDF Downloads 300
23711 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques used to improve decision-making; leveraging some of the most complex techniques in data science, it is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may face limitations. This paper studies the capability of using Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rock fall analyses, rock mass characterization, and drone data.
This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need for multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.

Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 93