Search results for: panel data method
37642 Top-Down Construction Method in Concrete Structures: Advantages and Disadvantages of This Construction Method
Authors: Hadi Rouhi Belvirdi
Abstract:
The construction of underground structures using the traditional method, which begins with excavation and the implementation of the foundation of the underground structure, continues with the construction of the main structure from the ground up, and concludes with the completion of the final ceiling, is known as the Bottom-Up Method. In contrast to this method, there is an advanced technique called the Top-Down Method, which has practically replaced the traditional construction method in large projects in industrialized countries in recent years. Unlike the traditional approach, this method starts with the construction of surrounding walls, columns, and the final ceiling and is completed with the excavation and construction of the foundation of the underground structure. Some of the most significant advantages of this method include the elimination or minimization of formwork surfaces, the removal of temporary bracing during excavation, the creation of some traffic facilities during the construction of the structure, and the possibility of using it in limited and high-traffic urban spaces. Despite these numerous advantages, unfortunately, there is still insufficient awareness of this method in our country, to the extent that it can be confidently stated that most stakeholders in the construction industry are unaware of the existence of such a construction method. However, it can be utilized as a very important execution option alongside other conventional methods in the construction of underground structures. Therefore, due to the extensive practical capabilities of this method, this article aims to present a methodology for constructing underground structures based on the aforementioned advanced method to the scientific community of the country, examine the advantages and limitations of this method and their impacts on time and costs, and discuss its application in urban spaces. Finally, some underground structures executed in the Ahvaz urban rail project, which are being implemented using this advanced method, will be introduced.
Keywords: top-down method, bottom-up method, underground structure, construction method
Procedia PDF Downloads 12
37641 Evaluation of Three Potato Cultivars for Processing (Crisp French Fries)
Authors: Hatim Bastawi
Abstract:
Three varieties of potatoes, namely Agria, Alpha and Diamant, were evaluated for their suitability for industrial production of French fries. The evaluation was undertaken after testing the quality parameters of specific gravity, dry matter, peeling ratio, and defects after frying, together with a panel test. The variety Agria ranked the best, followed by Alpha, with regard to the parameters tested. On the other hand, Diamant showed a significantly higher defect percentage than the other cultivars. It was also judged by the panelists to have significantly lower acceptance.
Keywords: cultivars, crisps, French fries
Procedia PDF Downloads 261
37640 Providing Security to Private Cloud Using Advanced Encryption Standard Algorithm
Authors: Annapureddy Srikant Reddy, Atthanti Mahendra, Samala Chinni Krishna, N. Neelima
Abstract:
In our present world, we generate a lot of data and need a specific place to store it all. Generally, we store data on pen drives, hard drives, etc. Sometimes we may lose the data due to the corruption of these devices. To overcome these issues, we implemented a cloud space for storing the data, which provides more security for the data. The data can be accessed from anywhere in the world using just the internet. We implemented all of this in Java using the NetBeans IDE. Once the user uploads the data, he does not have any rights to change it. Users' uploaded files are stored in the cloud with the file name set to the system time, and the directory is created with some random words. The cloud accepts the data only if the size of the file is less than 2 MB.
Keywords: cloud space, AES, FTP, NetBeans IDE
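The abstract gives only the storage rules (timestamp file name, randomly named directory, 2 MB limit) and names AES without specifying a mode, so the following is only a minimal Python sketch of those rules; AES-GCM from the cryptography package, uuid-based directory names, and all other identifiers here are assumptions standing in for the authors' Java/NetBeans implementation.

```python
# Minimal sketch: AES-encrypted upload following the rules in the abstract.
# Assumption: AES-GCM from the "cryptography" package stands in for the
# paper's (unspecified) AES mode; naming rules and the 2 MB limit follow the text.
import os
import time
import uuid
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MAX_SIZE = 2 * 1024 * 1024  # cloud accepts files smaller than 2 MB

def upload(cloud_root: Path, payload: bytes, key: bytes) -> Path:
    if len(payload) >= MAX_SIZE:
        raise ValueError("file rejected: size must be below 2 MB")
    # Directory name is random; file name is the system time.
    directory = cloud_root / uuid.uuid4().hex
    directory.mkdir(parents=True, exist_ok=True)
    file_path = directory / time.strftime("%Y%m%d%H%M%S")
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, None)
    file_path.write_bytes(nonce + ciphertext)  # store nonce alongside ciphertext
    return file_path

key = AESGCM.generate_key(bit_length=256)
stored = upload(Path("cloud_space"), b"example document contents", key)

# Read-only access: decrypt without ever rewriting the stored file.
blob = stored.read_bytes()
print(AESGCM(key).decrypt(blob[:12], blob[12:], None))
```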
Procedia PDF Downloads 206
37639 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. This study therefore aimed at developing meaningful indicators that describe the overall data quality of each dataset and allow quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were taken to develop the Dataset Quality Index (SDQI). First, we define standard data quality expectations. Second, we find indicators that can be measured directly on the data within datasets. Third, the indicators are aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions are aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements contained ten useful indicators; (2) in developing the data quality dimensions based on statistical characteristics, we found that the ten indicators can be reduced to four dimensions; and (3) for the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can be separated into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, by using the SDQI for assessment in the first sprint. After passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
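The pipeline described (ten indicators aggregated into four dimensions by factor analysis, then weighted into one composite score with three quality levels) can be sketched as follows. This is only an illustration: the random data, the weights, the cut points, and the use of scikit-learn's FactorAnalysis are assumptions, not the study's actual indicators or software.

```python
# Minimal sketch of the aggregation pipeline: indicators -> dimensions -> composite.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = pd.DataFrame(
    rng.uniform(0, 1, size=(500, 10)),             # 500 datasets x 10 quality indicators
    columns=[f"indicator_{i}" for i in range(10)],
)

# Aggregate the ten indicators into four dimensions with factor analysis.
scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(
    StandardScaler().fit_transform(indicators)
)

# Weight dimensions by (assumed) preparation effort and usability.
weights = np.array([0.4, 0.3, 0.2, 0.1])

# Aggregate dimensions into the composite Dataset Quality Index and bin it.
sdqi = scores @ weights
levels = pd.cut(sdqi, bins=3, labels=["Poor", "Acceptable", "Good"])
print(pd.Series(levels).value_counts())
```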
Procedia PDF Downloads 139
37638 Short-Term Load Forecasting Based on Variational Mode Decomposition and Least Square Support Vector Machine
Authors: Jiangyong Liu, Xiangxiang Xu, Bote Luo, Xiaoxue Luo, Jiang Zhu, Lingzhi Yi
Abstract:
To address the problems of non-linearity and high randomness of the original power load sequence, which degrade power load forecasting accuracy, a short-term load forecasting method is proposed. The method is based on the Least Square Support Vector Machine optimized by an Improved Sparrow Search Algorithm, combined with the Variational Mode Decomposition proposed in this paper. Applying the variational mode decomposition technique decomposes the raw power load data into a series of Intrinsic Mode Function components, which reduces the complexity and instability of the raw data while overcoming modal confounding; the proposed improved sparrow search algorithm solves the problem of the difficult selection of learning parameters in the Least Square Support Vector Machine. Finally, through comparison experiments, the results show that the method can effectively improve prediction accuracy.
Keywords: load forecasting, variational mode decomposition, improved sparrow search algorithm, least square support vector machine
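The overall structure is decompose, forecast each component, and recombine. The sketch below shows only that structure under heavy simplification: a moving-average trend/residual split stands in for VMD, and scikit-learn's SVR with hand-picked parameters stands in for the ISSA-optimized LSSVM; the synthetic load series and lag length are likewise assumptions.

```python
# Minimal sketch of the decompose-then-forecast pipeline (stand-ins, not the paper's method).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
load = 100 + 10 * np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 2, 500)

# Stand-in decomposition: slow trend + fast residual (VMD would give K modes).
trend = np.convolve(load, np.ones(24) / 24, mode="same")
components = [trend, load - trend]

def lagged(series, n_lags=24):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

# Forecast each component separately, then sum the component forecasts.
forecast = 0.0
for comp in components:
    X, y = lagged(comp)
    model = SVR(C=10.0, gamma="scale").fit(X[:-1], y[:-1])
    forecast += model.predict(X[-1:])[0]

print(f"next-step load forecast: {forecast:.2f}")
```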
Procedia PDF Downloads 108
37637 Effectiveness of Balloon Angioplasty and Stent Angioplasty: Wound Healing in Critical Limb Ischemia
Authors: M. Wisnu Pamungkas, Patrianef Darwis
Abstract:
Introduction: Critical limb ischemia (CLI) is a vascular disease that carries significant amputation and mortality risk; diabetes mellitus, the most significant risk factor for CLI, is very common among Indonesians. Endovascular intervention (EVI) is preferred in treating CLI because it is minimally invasive and effective. Balloon angioplasty and stent angioplasty are the most common methods of EVI in Indonesia. This study aims to compare the effectiveness of balloon angioplasty and stent angioplasty on wound healing in CLI. Method: A cross-sectional study enrolled 90 subjects with CLI who underwent endovascular intervention using balloon angioplasty or stent angioplasty from January 2013 to July 2017 in dr. Cipto Mangunkusumo General Hospital, Jakarta. The wound healing period between balloon angioplasty and stent angioplasty was analyzed using an unpaired T-test, with p < 0.05 considered statistically significant. Data on the intervention method, the wound healing period, and subject characteristics (age, amputation, BMI, smoking habit, DM, occlusion site, and blood profile) were obtained. Result: The wound healing periods in balloon angioplasty and stent angioplasty were normally distributed. The mean wound healing periods in balloon angioplasty and stent angioplasty were 84.8 +/- 2.423 and 59.93 +/- 2.423 days, a mean difference of about 25 days. The difference in wound healing period between the two groups is statistically significant (p < 0.05). Amputation events in balloon angioplasty and stent angioplasty were 22 and 16, respectively, with no statistically significant difference. Conclusion: Stent angioplasty is a better method than balloon angioplasty for wound healing in patients with CLI.
Keywords: critical limb ischemia, endovascular intervention, wound healing, angioplasty
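The statistical comparison is a standard unpaired t-test; a minimal sketch is shown below with simulated wound-healing periods (group sizes, spreads, and values are illustrative assumptions, since the study's raw data are not given in the abstract).

```python
# Minimal sketch of the unpaired t-test comparison described above (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
balloon = rng.normal(loc=84.8, scale=15.0, size=45)   # healing time in days, hypothetical spread
stent = rng.normal(loc=59.9, scale=15.0, size=45)

t_stat, p_value = stats.ttest_ind(balloon, stent, equal_var=True)
print(f"mean difference: {balloon.mean() - stent.mean():.1f} days")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  (significant if p < 0.05)")
```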
Procedia PDF Downloads 126
37636 Building Transparent Supply Chains through Digital Tracing
Authors: Penina Orenstein
Abstract:
In today's world, particularly with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption and stay competitive. The concept of supply chain mapping is one where every process and route is mapped in detail between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also doesn't allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question as to whether the supply chain is re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they're manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
Keywords: data mining, supply chain, empirical research, data mapping
Procedia PDF Downloads 175
37635 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative for laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from the given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, the adaptive control Ensemble Kalman Filter is implemented to combine the optimality of observation data in order to obtain an accurate estimation of the topography. The main features of this method are, on one hand, the ability to solve for different complex geometries with no need for any rearrangements in the original model to rewrite it in an explicit form, and on the other hand, its achievement of strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
Procedia PDF Downloads 130
37634 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network
Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi
Abstract:
Energy, delay, and bandwidth are the prime issues of wireless sensor networks (WSNs). Energy usage optimization and efficient bandwidth utilization are important issues in WSNs. Event-triggered data aggregation facilitates such optimal tasks for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge of WSNs. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from sensor nodes within the cluster. (3) The cluster head node identifies and classifies the events out of the collected data using a Bayesian classifier. (4) Aggregation of the data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance, and bandwidth. (6) If the aggregated data is critical, the cluster head sends the aggregated data over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path that has more bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication
Procedia PDF Downloads 451
37633 Mapping of Arenga Pinnata Tree Using Remote Sensing
Authors: Zulkiflee Abd Latif, Sitinor Atikah Nordin, Alawi Sulaiman
Abstract:
Different tree species provide various benefits. The Arenga pinnata tree species has several potential uses that are valuable for the economy and the country. Mapping vegetation using remote sensing techniques involves various processes, techniques, and considerations. Using satellite imagery, this method enables access to inaccessible areas, and with the availability of the near-infrared band, it is useful in vegetation analysis, especially in identifying tree species. Pixel-based and object-based classification techniques are used as methods in this study. The pixel-based classification technique used in this study is divided into unsupervised and supervised classification. The object-based classification technique has become a popular alternative method in the classification process. Using spectral, texture, color, and other information to classify the target makes object-based classification a promising technique for classification. The classification of Arenga pinnata trees is overlaid with elevation, slope and aspect, soil and river data, and several other datasets to give information regarding the trees' character and living environment. This paper presents the utilization of remote sensing techniques in order to map the Arenga pinnata tree species.
Keywords: Arenga Pinnata, pixel-based classification, object-based classification, remote sensing
Procedia PDF Downloads 380
37632 Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence
Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan
Abstract:
It is known that industrial applications using solar panels constructed from silicon solar cells require high-efficiency performance. One of the main problems in solar panels is various mechanical and structural defects, which cause a decrease in the generated power. To analyse defects in solar cells, various techniques are used; however, thermal imaging is a fast and simple method for locating defects. The main goal of this work was to analyze defects in constructed flexible silicon photovoltaic modules via thermal imaging and the electroluminescence method. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A, Poland) with a conventional DC source. Electroluminescence was observed by the Steinbeis Center Photovoltaics (Stuttgart, Germany) using a camera equipped with a 16 Mpix Si-CCD detector (Kodak KAF-16803 type). The camera has a typical spectral response in the range 350 - 1100 nm with a maximum QE of 60% at 550 nm. In our work, commercial silicon solar cells with the size 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with the size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker regions, where the solar cell is inactive, and brighter regions corresponding to correctly working photovoltaic cells. The electroluminescence method is non-destructive and gives greater image resolution, thereby allowing a more precise evaluation of microcracks in the solar cells after the lamination process. Our study showed good correlations between the defects observed by thermal imaging and electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows the fast, simple, and inexpensive localization of defects at the level of single solar cells and modules. Moreover, the thermographic camera was also useful for detecting the electrical interconnections between single solar cells.
Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging
Procedia PDF Downloads 316
37631 Remote Sensing and GIS Integration for Paddy Production Estimation in Bali Province, Indonesia
Authors: Sarono, Hamim Zaky Hadibasyir, and Ridho Kurniawan
Abstract:
Estimation of paddy production is one of the areas that can be examined using the techniques of remote sensing and geographic information systems (GIS) in the field of agriculture. The purpose of this research is to estimate the amount of paddy production and to show how remote sensing and geographic information systems (GIS) are able to perform the analysis of paddy production estimation in Tegallalang and Payangan Sub-districts, Bali Province, Indonesia. The method used is the land suitability method. This method associates physical parameters, embodied in the smallest mapping unit that represents a particular field, with that field's productivity. The analysis of estimated production uses the FAO standard land suitability with a matching technique. The parameters used to create the land units are slope (FAO), climate classification (Oldeman), landform (Prapto Suharsono), and soil type. The land use map consists of paddy and non-paddy field information obtained from GeoEye-1 imagery using a visual interpretation technique. Landsat imagery was used for the interpretation of the landform; the classification of slopes was obtained from height point identification with a spline interpolation method, whereas the climate and soil data are secondary data originating from the related institutions. The results of this research indicate that the wetland suitability in Tegallalang and Payangan Sub-districts consists of S1 (very suitable), covering an area of 2,884.7 ha with a productivity of 5 tons/ha, and S2 (suitable), covering an area of 482.9 ha with a productivity of 3 tons/ha. The resulting total estimated paddy production in both sub-districts is 31,744.3 tons in one year.
Keywords: production estimation, paddy, remote sensing, geography information system, land suitability
Procedia PDF Downloads 342
37630 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business Intelligence is a methodology that systematically exploits data to produce information and knowledge; business intelligence can support the decision-making process. Among the methods in business intelligence are the data warehouse and data mining. A data warehouse can store historical data derived from transactional data. For data modelling in the data warehouse, we apply Kimball's dimensional modelling. Data mining is used to extract patterns from the data and gain insight from it. Data mining has many techniques, one of which is segmentation. For the profiling of telecommunication customers, we use customer segmentation according to the customers' usage of services, customer invoices, and customer payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, using the RFM (Recency, Frequency and Monetary) model as the input variables. For all data mining processes, we use the IBM SPSS Modeler tool.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
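The segmentation step (K-Means on RFM features) is easy to reproduce outside IBM SPSS Modeler; the sketch below uses scikit-learn and synthetic billing records as stand-ins, with the number of clusters and feature distributions assumed for illustration.

```python
# Minimal sketch of RFM-based K-Means customer segmentation (synthetic data).
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
customers = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 1000),   # days since last transaction
    "frequency": rng.poisson(12, 1000),            # transactions per year
    "monetary": rng.gamma(2.0, 50.0, 1000),        # total billed amount
})

X = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Profile each segment; high-monetary / high-frequency segments flag profitable customers.
print(customers.groupby("segment").mean().round(1))
```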
Procedia PDF Downloads 484
37629 Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners
Authors: Saheed A. Gbadegeshin
Abstract:
A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and improving national economic growth. Thus, there are several scholarly publications on it, either presenting or testing different methods for commercialization. However, young entrepreneurs, technologists and scientists would like to know the best method to commercialize their innovations. Then, this question arises: What is the best commercialization method? To answer the question, a systematic literature review was conducted, and practitioners were interviewed. The literature results revealed that there are many methods, but new methods are needed to improve commercialization, especially during these times of economic crisis and political uncertainty. Similarly, the empirical results showed there are several methods, but the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptability. Therefore, it was concluded that a new commercialization method is essential for today's high technologies, and such a method was presented.
Keywords: commercialization method, technology, knowledge, intellectual property, innovation, invention
Procedia PDF Downloads 342
37628 Developments in Corporate Governance and Economic Growth in Sub-Saharan Africa
Authors: Martha Matashu
Abstract:
This study examined corporate governance and economic growth trends in Sub-Saharan African (SSA) countries. The need for corporate governance arises from the fact that the day-to-day running of the business is done by management who, in accordance with neoclassical theory and agency theory, have inherent tendencies to use the resources of the company to their own advantage. This prevails against a background where endogenous economic growth theory holds the assumption that economic growth is an outcome of the overall performance of all companies within an economy. This suggests that corporate governance at the firm level determines economic growth through its impact on overall performance. Nevertheless, insights from the literature suggest that efforts to promote corporate governance in countries across SSA from the 1980s to date have not yet yielded the desired outcomes. Board responsibilities, shareholder rights, disclosure and transparency, protection of minority shareholders, and liability of directors were thus used as proxies of corporate governance, because these mechanisms are believed to enhance company performance through their effect on accountability and transparency. Using panel data techniques, corporate governance and economic growth data for 29 SSA countries for the period 2008 to 2019 were analysed. The findings revealed a declining economic growth trend despite an increase in corporate governance aspects such as director liability, shareholders' rights, and protection of minority shareholders in SSA countries. These findings are in contradiction to the popularly held theoretical principles of economic growth and corporate governance. The study reached the conclusion that a nonlinear relationship exists between corporate governance and economic growth within the selected SSA countries during the period under investigation. This study thus recommends that measures be taken to create conditions for corporate governance that would bolster significant positive contributions to economic growth in the region.
Keywords: corporate governance, economic growth, Sub-Saharan Africa, agency theory, endogenous theory
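The abstract reports a panel analysis of 29 countries over 2008-2019 without naming the estimator, so the following is only a minimal fixed-effects sketch of how such a panel regression could be set up. The linearmodels package, the entity/time-effects specification, the quadratic term, the variable names, and the synthetic data are all assumptions, not the study's model.

```python
# Minimal fixed-effects panel regression sketch (synthetic data, assumed specification).
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
countries = [f"country_{i}" for i in range(29)]
years = list(range(2008, 2020))
index = pd.MultiIndex.from_product([countries, years], names=["country", "year"])

df = pd.DataFrame(index=index)
df["governance"] = rng.normal(0, 1, len(df))            # composite governance proxy
df["gdp_growth"] = (
    0.5 * df["governance"] - 0.2 * df["governance"] ** 2 + rng.normal(0, 1, len(df))
)

# A quadratic term allows for the kind of nonlinear relationship the study reports.
exog = pd.DataFrame({
    "governance": df["governance"],
    "governance_sq": df["governance"] ** 2,
}, index=df.index)

model = PanelOLS(df["gdp_growth"], exog, entity_effects=True, time_effects=True)
print(model.fit(cov_type="clustered", cluster_entity=True).summary)
```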
Procedia PDF Downloads 149
37627 Assessment of Rainfall Erosivity, Comparison among Methods: Case of Kakheti, Georgia
Authors: Mariam Tsitsagi, Ana Berdzenishvili
Abstract:
Rainfall intensity change is one of the main indicators of climate change. It has a great influence on agriculture as one of the main factors causing soil erosion. Splash and sheet erosion are among the most prevalent and harmful types for agriculture. They are invisible to the eye at the first stage, but the process gradually develops into stream-cutting erosion. Our study provides an assessment of the rainfall erosivity potential in the Kakheti region with the use of modern research methods. The region is the major provider of wheat and wine in the country. Kakheti is located in the eastern part of Georgia and is characterized by quite a variety of natural conditions. The climate is dry subtropical. For the assessment of the exact rate of rainfall erosivity potential, several years of rainfall data at short intervals are needed. Unfortunately, of the 250 meteorological stations running during the Soviet period, only 55 are active now, 5 of them in the Kakheti region. Rainfall intensity data for this region have been available since 1936, and the rainfall erosivity potential has been assessed in some older papers, but since 1990 there have been no data on this factor, which is a necessary parameter for determining the rainfall erosivity potential. On the other hand, researchers and local communities suppose that rainfall intensity has been changing and that the number of hail days has also been increasing. Finding a method that will allow us to determine the rainfall erosivity potential as accurately as possible in the Kakheti region is therefore very important. The study period was divided into three sections: 1936-1963, 1963-1990, and 1990-2015. The rainfall erosivity potential was determined from the scientific literature and old meteorological station data for the first two periods. It is known that in eastern Georgia, at the boundary between the steppe and forest zones, rainfall erosivity in 1963-1990 was 20-75% higher than that in 1936-1963. For the third period (1990-2015), no rainfall intensity data are available. A variety of studies discuss alternative ways of calculating the rainfall erosivity potential when data are lacking, e.g., based on daily rainfall data, average annual rainfall data and the elevation of the area, etc. It should be noted that these methods give totally different results under different climatic conditions and sometimes produce large errors. Three of the most common methods were selected for our research. Each of them was tested on the first two sections of the study period. According to the outcomes, the method most suitable for the regional climatic conditions was selected, and after that, we determined the rainfall erosivity potential for the third section of our study period using the most successful method. Outcome data such as attribute tables and graphs were linked to the database of Kakheti, and appropriate thematic maps were created. The results allowed us to analyze the changes in rainfall erosivity potential from 1936 to the present and to make projections for the future. We have successfully implemented a method which can also be used for other regions of Georgia.
Keywords: erosivity potential, Georgia, GIS, Kakheti, rainfall
Procedia PDF Downloads 224
37626 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interface, epilepsy, and cognition studies have been suggested in the literature and used for data classification. The proposed method, however, focuses mainly on a better presentation of the signal, which is why it could be a good utilization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second successively with different color codes. The segment length can be selected as per the need of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, and so it can be improved for use in real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
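The core idea (band-pass filtering followed by per-window band powers that label each segment) can be sketched as follows. This is only an illustration: the synthetic channel, a 0.5 Hz lower cut-off (used here for numerical stability instead of the paper's 0.1 Hz), the 1 s window, and the band definitions are assumptions, and the PCA/wavelet de-noising stage is omitted.

```python
# Minimal sketch of band-pass filtering + per-window EEG band-power segmentation.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 256                                   # sampling frequency (Hz), assumed
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, fs * 60)            # one synthetic 60 s channel

b, a = butter(4, [0.5, 45], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
window = fs                                # 1 s sliding segments

# Show the first ten segments; the tool slides this over the whole recording.
for start in range(0, 10 * window, window):
    seg = filtered[start:start + window]
    freqs, psd = welch(seg, fs=fs, nperseg=window)
    powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum() for name, (lo, hi) in bands.items()}
    label = max(powers, key=powers.get)    # dominant band gives the segment's color code
    print(f"t={start / fs:3.0f}s  dominant band: {label}")
```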
Procedia PDF Downloads 397
37625 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
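The least-squares catenary fit at the heart of the method can be illustrated in 2D; the sketch below fits the standard catenary sag curve to simulated noisy points with SciPy. The 3D per-cluster fitting, the clustering itself, and the iterative pole-pair search are not reproduced, and the parameterization and noise level are assumptions.

```python
# Minimal sketch of fitting a catenary to noisy conductor points by least squares.
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """Catenary sag curve: a controls sag, x0 is the lowest point, c is a height offset."""
    return a * np.cosh((x - x0) / a) + c

rng = np.random.default_rng(0)
span = np.linspace(0.0, 80.0, 200)                     # metres between two poles
true = catenary(span, a=120.0, x0=40.0, c=-100.0)
points = true + rng.normal(0.0, 0.05, span.size)       # simulated LiDAR noise

params, _ = curve_fit(catenary, span, points, p0=[100.0, 40.0, -80.0])
a_fit, x0_fit, c_fit = params
rmse = np.sqrt(np.mean((catenary(span, *params) - points) ** 2))
print(f"a={a_fit:.1f}, x0={x0_fit:.1f}, c={c_fit:.1f}, RMSE={rmse:.3f} m")
```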
Procedia PDF Downloads 99
37624 Data-Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of the life of a wind farm are rarely reliable, which makes it important to conduct research and study the failure and repair rates of the wind turbines under various conditions. This miscalculation happens because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensory data and a data-driven approach. The data cleaning and data processing are done by comparing the power curve data of the wind turbines with SCADA data, which are then converted to time-to-repair and time-to-failure time series data. Several different mathematical functions are fitted to the times to failure and times to repair of the wind turbine components using Maximum Likelihood Estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbines.
Keywords: reliability, bayesian parameter inference, maximum likelihood estimation, weibull function, SCADA data
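The maximum-likelihood comparison of a two-parameter Weibull against an exponential fit can be sketched with SciPy as below; the synthetic times-to-failure stand in for the project's SCADA-derived data, and the Bayesian posterior-expectation step is not reproduced.

```python
# Minimal sketch: fit Weibull and exponential distributions to times-to-failure by MLE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ttf = rng.weibull(1.8, 300) * 1200.0        # synthetic times to failure, hours

# Two-parameter Weibull: fix the location at zero, estimate shape and scale.
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
loglik_weibull = stats.weibull_min.logpdf(ttf, shape, loc, scale).sum()

# Exponential (constant failure rate), the usual simplifying assumption.
eloc, escale = stats.expon.fit(ttf, floc=0)
loglik_expon = stats.expon.logpdf(ttf, eloc, escale).sum()

print(f"Weibull shape={shape:.2f}, scale={scale:.0f} h, log-likelihood={loglik_weibull:.1f}")
print(f"Exponential mean={escale:.0f} h, log-likelihood={loglik_expon:.1f}")
# A Weibull shape parameter near 1 is one way the two fits can look almost identical.
```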
Procedia PDF Downloads 86
37623 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter
Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai
Abstract:
A sediment map is quite important in the marine environment. The sediment itself contains a wealth of information that can be used for other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment types around the coral reef by using bathymetry and backscatter data. Sediment in the study area was collected as ground-truthing data to verify the classification of the seabed. A dry sieving method, using a sieve shaker, was used to analyze the sediment samples. PDS 2000 software was used for data acquisition, and Qimera QPS version 2.4.5 was used for processing the bathymetry data, while FMGT QPS version 7.10 was used to process the backscatter data. The backscatter data were then analyzed using the maximum likelihood classification tool in ArcGIS version 10.8 software. The results identified three types of sediment around the coral reef: very coarse sand, coarse sand, and medium sand.
Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS
Procedia PDF Downloads 86
37622 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India
Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah
Abstract:
A Digital Elevation Model (DEM) is elevation data on a virtual grid over the ground. DEMs can be used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis, surveying, etc. For morphometric analysis, the stream flow network plays a very important role. A DEM lacks accuracy and cannot match field data as it should for accurate results of morphometric analysis. The present study focuses on comparing the Agree method and the conventional Shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin, which is located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc resolution) and toposheets issued by the Survey of India (SOI) were used to determine the morphometric linear aspects, such as stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow, and constant of channel maintenance; the areal aspects, such as drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio, and shape factor; and the relief aspects, such as relief ratio, gradient ratio, and basin relief, for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets. The Agree DEM was created using the SRTM data and the stream network from the toposheets. The results obtained were used to demonstrate a comparison between the two methods in the flat areas.
Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method
Procedia PDF Downloads 239
37621 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as a high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the point that can contribute to low energy consumption the most is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas that do not have any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because we can remove redundant data. On the other hand, it is difficult to handle the data because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as we cannot use existing algorithms straightforwardly, we have to design a new processing algorithm to cope with DVS data. In order to solve the difficulties caused by the data format differences, most of the prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though we can feed the data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of the RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we propose to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first make frame data divided by a certain time period, and then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expect that this data representation can capture the features, especially of moving objects, because the timestamps represent the movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because this sensor can run for a long time with low energy consumption in a non-dynamic situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which makes frames in the same way as ours but feeds polarity information to the CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
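The timestamp-based frame construction can be sketched directly in NumPy: within each time window, a pixel's intensity is set by how recently its last event arrived. The sensor resolution, window length, and linear recency rule below are assumptions standing in for the paper's exact mapping, and a synthetic event stream replaces real DVS output.

```python
# Minimal sketch: asynchronous events (x, y, polarity, timestamp) -> timestamp-weighted frame.
import numpy as np

HEIGHT, WIDTH = 260, 346             # assumed DVS resolution
WINDOW = 0.033                       # seconds of events per frame (~30 fps)

def events_to_timestamp_frame(x, y, t, t_end):
    """Intensity ~ recency: events at t_end map to 1.0, events WINDOW ago map to 0.0."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    keep = (t > t_end - WINDOW) & (t <= t_end)
    recency = (t[keep] - (t_end - WINDOW)) / WINDOW
    # Later events overwrite earlier ones at the same pixel (assignment in time order).
    order = np.argsort(t[keep])
    frame[y[keep][order], x[keep][order]] = recency[order]
    return frame

# Synthetic event stream standing in for real DVS output.
rng = np.random.default_rng(0)
n = 5000
x = rng.integers(0, WIDTH, n)
y = rng.integers(0, HEIGHT, n)
t = np.sort(rng.uniform(0.0, 0.1, n))

frame = events_to_timestamp_frame(x, y, t, t_end=0.1)
print(frame.shape, frame.max(), int((frame > 0).sum()), "active pixels")
```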
Procedia PDF Downloads 97
37620 Diagnostic Clinical Skills in Cardiology: Improving Learning and Performance with Hybrid Simulation, Scripted Histories, Wearable Technology, and Quantitative Grading – The Assimilate Excellence Study
Authors: Daly M. J, Condron C, Mulhall C, Eppich W, O'Neill J.
Abstract:
Introduction: In contemporary clinical cardiology, comprehensive and holistic bedside evaluation including accurate cardiac auscultation is in decline despite having positive effects on patients and their outcomes. Methods: Scripted histories and scoring checklists for three clinical scenarios in cardiology were co-created and refined through iterative consensus by a panel of clinical experts; these were then paired with recordings of auscultatory findings from three actual patients with known valvular heart disease. A wearable vest with embedded pressure-sensitive panel speakers was developed to transmit these recordings when examined at the standard auscultation points. RCSI medical students volunteered for a series of three formative long case examinations in cardiology (LC1 – LC3) using this hybrid simulation. Participants were randomised into two groups: Group 1 received individual teaching from an expert trainer between LC1 and LC2; Group 2 received the same intervention between LC2 and LC3. Each participant’s long case examination performance was recorded and blindly scored by two peer participants and two RCSI examiners. Results: Sixty-eight participants were included in the study (age 27.6 ± 0.1 years; 74% female) and randomised into two groups; there were no significant differences in baseline characteristics between groups. Overall, the median total faculty examiner score was 39.8% (35.8 – 44.6%) in LC1 and increased to 63.3% (56.9 – 66.4%) in LC3, with those in Group 1 showing a greater improvement in LC2 total score than that observed in Group 2 (p < .001). Using the novel checklist, intraclass correlation coefficients (ICC) were excellent between examiners in all cases: ICC .994 – .997 (p < .001); correlation between peers and examiners improved in LC2 following peer grading of LC1 performances: ICC .857 – .867 (p < .001). Conclusion: Hybrid simulation and quantitative grading improve learning, standardisation of assessment, and direct comparisons of both performance and acumen in clinical cardiology.
Keywords: cardiology, clinical skills, long case examination, hybrid simulation, checklist
Procedia PDF Downloads 110
37619 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence
Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno
Abstract:
Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data in the presence of missing values. Imputation is the most commonly used among these methods. However, in order to minimize the bias introduced due to imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation only on the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD data improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index
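The evaluation step (fit a logistic regression, then compare Wald statistics and confidence-interval widths between the imputed and non-imputed datasets) can be sketched with statsmodels. The synthetic predictors below stand in for the MESA-based simulation data, the rough set imputation itself is not reproduced, and coefficient-level intervals are used here rather than the paper's interval for the predicted probability of incontinence.

```python
# Minimal sketch: logistic regression, Wald statistics, and CI widths on one dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.normal(60, 10, n)
bmi = rng.normal(27, 4, n)
logit = -8 + 0.08 * age + 0.1 * bmi
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, bmi]))
result = sm.Logit(y, X).fit(disp=0)

wald_z = result.params / result.bse             # Wald z statistic per coefficient
ci_width = np.diff(result.conf_int(), axis=1)   # width of each 95% CI
print("Wald z:", np.round(wald_z, 2))
print("95% CI widths:", np.round(ci_width.ravel(), 3))
# Repeating this on imputed vs. non-imputed datasets gives the comparison of
# Wald scores and CI widths reported in the abstract.
```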
Procedia PDF Downloads 168
37618 Critical Comparison of Two Teaching Methods: The Grammar Translation Method and the Communicative Teaching Method
Authors: Aicha Zohbie
Abstract:
The purpose of this paper is to critically compare two teaching methods: the communicative method and the grammar-translation method. The paper presents the importance of language awareness as an approach to teaching and learning language and some challenges that language teachers face. In addition, the paper strives to determine whether the adoption of communicative teaching methods or the grammar-translation method would be more effective for teaching a language. A variety of features are considered for comparing the two methods: the purpose of each method, the techniques used, teachers’ and students’ roles, the use of L1, the skills that are emphasized, the correction of students’ errors, and the students’ assessments. Finally, the paper includes suggestions and recommendations for implementing an approach that best meets the students’ needs in a classroom.
Keywords: language teaching methods, language awareness, communicative method, grammar translation method, advantages and disadvantages
Procedia PDF Downloads 151
37617 Comparative Analysis of Effect of Capital Structure to Profitability in Manufacturing Sector in Indonesia and Malaysia in 2009 - 2014
Authors: Hatane Semuel, Hartmann H. Ngono, Sautma R. Basana
Abstract:
The effect of capital structure on profitability is often debated by financial researchers. The application of trade-off theory and pecking order theory to analyze this relationship may generate different views. Each company has its own strategies to achieve its objectives, and the external environment, such as state policy, has a broad impact on the relationship between capital structure and the company's profitability. Malaysia is the country closest to Indonesia with a similar growth rate of GDP and industrial production, but Malaysia has a lower inflation rate than Indonesia. This study was conducted to compare the performance of the manufacturing sector in the two countries when entering the era of the ASEAN Economic Community (AEC). The samples for this study were 69 companies in Indonesia and 242 companies in Malaysia engaged in the manufacturing sector. The study uses panel data analysis. The study found that capital structure has a positive effect on the profitability of manufacturing companies in Indonesia, and a negative effect on manufacturing companies in Malaysia. The results also showed that there are significant differences in the effect of short-term debt on the profitability of manufacturing companies between the two countries, Indonesia and Malaysia.
Keywords: capital structure, Indonesia, Malaysia, manufacturing, profitability
Procedia PDF Downloads 382
37616 Heavy Vehicle Traffic Estimation Using Automatic Traffic Recorders/Weigh-In-Motion Data: Current Practice and Proposed Methods
Authors: Muhammad Faizan Rehman Qureshi, Ahmed Al-Kaisy
Abstract:
Accurate estimation of traffic loads is critical for pavement and bridge design, among other transportation applications. Given the disproportionate impact of heavier axle loads on pavement and bridge structures, truck and heavy vehicle traffic is expected to be a major determinant of traffic load estimation. Further, heavy vehicle traffic is also a major input in transportation planning and economic studies. The traditional method for estimating heavy vehicle traffic primarily relies on AADT estimation using Monthly Day of the Week (MDOW) adjustment factors as well as the percent heavy vehicles observed using statewide data collection programs. The MDOW factors are developed using daily and seasonal (or monthly) variation patterns for total traffic, consisting predominantly of passenger cars and other smaller vehicles. Therefore, while using these factors may yield reasonable estimates for total traffic (AADT), such estimates may involve a great deal of approximation when applied to heavy vehicle traffic. This research aims at assessing the approximation involved in estimating heavy vehicle traffic using MDOW adjustment factors for total traffic (the conventional approach), along with three other methods: using MDOW adjustment factors for total trucks (class 5-13), for combination-unit trucks (class 8-13), and adjustment factors for each vehicle class separately. The results clearly indicate that the conventional method was outperformed by the other three methods by a large margin. Further, using the most detailed and data-intensive method (class-specific adjustment factors) does not necessarily yield a more accurate estimation of heavy vehicle traffic.
Keywords: traffic loads, heavy vehicles, truck traffic, adjustment factors, traffic data collection
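A small worked example makes the difference between the conventional and truck-specific factoring approaches concrete; every count and factor value below is an illustrative assumption, not data from the study.

```python
# Minimal sketch: expand a short count to AADT, then estimate heavy vehicle AADT two ways.
short_count_total = 9_400        # vehicles counted on one Tuesday in March
short_count_trucks = 1_050       # trucks (class 5-13) in the same count

mdow_factor_total = 1.08         # March/Tuesday factor built from total traffic
mdow_factor_trucks = 0.95        # factor built from truck traffic only

# Conventional approach: expand total traffic, then apply the percent heavy vehicles.
aadt_total = short_count_total * mdow_factor_total
pct_heavy = short_count_trucks / short_count_total
heavy_aadt_conventional = aadt_total * pct_heavy

# Truck-specific approach: expand the truck count with a truck-based factor.
heavy_aadt_truck_factor = short_count_trucks * mdow_factor_trucks

print(f"conventional estimate: {heavy_aadt_conventional:,.0f} trucks/day")
print(f"truck-factor estimate: {heavy_aadt_truck_factor:,.0f} trucks/day")
```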
Procedia PDF Downloads 23
37615 A Self-Organized Map Method to Classify Auditory-Color Synesthesia from Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Takamasa Komura, Yosuke Kurihara
Abstract:
Absolute pitch is the ability to identify a musical note without a reference tone. Training for absolute pitch often occurs in preschool education. It is necessary to clarify how well the trainee can make use of synesthesia in order to evaluate the effect of the training. To the best of our knowledge, there are no existing methods for objectively confirming whether the subject is using synesthesia. Therefore, in this study, we present a method to distinguish the use of color-auditory synesthesia from the separate use of color and audition during absolute pitch training. This method measures blood volume in the prefrontal cortex using functional near-infrared spectroscopy (fNIRS) and assumes that the cognitive step has two parts, a non-linear step and a linear step. For the linear step, we assume a second-order ordinary differential equation. For the non-linear part, it is extremely difficult, if not impossible, to create an inverse filter of such a complex system as the brain. Therefore, we apply a method based on a self-organizing map (SOM) and are guided by the available data. The presented method was tested on 15 subjects, and the estimation accuracy is reported.
Keywords: absolute pitch, functional near-infrared spectroscopy, prefrontal cortex, synesthesia
Procedia PDF Downloads 263
37614 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach
Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier
Abstract:
The main purpose of time series analysis is to learn about the dynamics behind some time-ordered measurement data. Two approaches are used in the literature to gain better knowledge of the dynamics contained in observation data sequences. The first of these approaches concerns time series decomposition, which is an important analysis step allowing patterns and behaviors to be extracted as components providing insight into the mechanisms producing the time series. In many cases, time series are short, noisy, and non-stationary. To provide components which are physically meaningful, methods such as Empirical Mode Decomposition (EMD), the Empirical Wavelet Transform (EWT) or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping a time series into a set of Rᵐ lag vectors by using the method of delays (MOD). Takens proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), which is a new adaptive method for decomposing non-linear and non-stationary time series into narrow-banded components. This method takes its origin from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to constitute a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We will first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method will then be described in detail and applied to experimental time series of observations resulting from total column ozone measurements. The results obtained will be compared with those provided by the previously mentioned decomposition methods. We will also compare the reconstruction quality of the observed dynamics obtained from the SSD and MOD methods.
Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis
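The SSA machinery that SSD builds on (trajectory-matrix embedding, singular value decomposition, and reconstruction by diagonal averaging) can be sketched in NumPy as below. The window length, the synthetic series, and the choice of which components to keep are assumptions; the adaptive, narrow-band selection that distinguishes SSD from plain SSA is not reproduced.

```python
# Minimal SSA sketch: embed, decompose with SVD, reconstruct by diagonal averaging.
import numpy as np

rng = np.random.default_rng(0)
n, window = 400, 60
t = np.arange(n)
series = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=n)   # signal + noise

# Trajectory (Hankel) matrix: lagged vectors as columns.
k = n - window + 1
trajectory = np.column_stack([series[i:i + window] for i in range(k)])

U, s, Vt = np.linalg.svd(trajectory, full_matrices=False)

def reconstruct(components):
    """Sum selected rank-one terms and average over anti-diagonals."""
    approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    rec = np.zeros(n)
    counts = np.zeros(n)
    for row in range(window):
        for col in range(k):
            rec[row + col] += approx[row, col]
            counts[row + col] += 1
    return rec / counts

oscillation = reconstruct([0, 1])   # the leading pair usually captures the sinusoid
rms_residual = float(np.sqrt(np.mean((series - oscillation) ** 2)))
print("residual RMS after keeping two components:", round(rms_residual, 3))
```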
Procedia PDF Downloads 104
37613 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the area under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a step prior to the quantification of carbon stocks in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic practice as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfil the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach in its initial stage is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on the vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems and of qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, and the field data will be used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) the characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report of the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
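The planned supervised step (a random forest classifying per-pixel NDVI time-series profiles, trained and validated with field labels) can be sketched as follows. The synthetic seasonal profiles, class names, number of time steps, and hold-out split are assumptions standing in for the data cube export and the 2020 field data.

```python
# Minimal sketch: random forest classification of NDVI time-series profiles.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_pixels, n_steps = 600, 36             # e.g. 36 monthly NDVI composites per pixel

# Three illustrative classes with different seasonal NDVI behaviour.
labels = rng.integers(0, 3, n_pixels)    # 0: cropland, 1: agroforestry, 2: woodland
season = np.sin(2 * np.pi * np.arange(n_steps) / 12)
base = np.array([0.35, 0.55, 0.70])[labels][:, None]
amplitude = np.array([0.25, 0.15, 0.05])[labels][:, None]
ndvi = base + amplitude * season + rng.normal(0, 0.05, (n_pixels, n_steps))

X_train, X_test, y_train, y_test = train_test_split(
    ndvi, labels, test_size=0.3, random_state=0, stratify=labels
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```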
Procedia PDF Downloads 145