Search results for: minimum data set
23945 Physiochemical and Antibacterial Assessment of Iranian Propolis Gathering in Qazvin Province
Authors: Nematollah Gheibi, Nader Divan Khosroshahi, Mahdi Mohammadi Ghanbarlou
Abstract:
Introduction: Nowadays, the phenomenon of bacterial resistance is one of the most important challenges facing the health community worldwide. Propolis is one of the most important products of bee colonies, collected from various plants, and many investigations have been carried out on its antibacterial effects. Material and methods: Thirty grams of propolis was prepared as an ethanolic extract, and after several purification steps, 7.5 g of the purified form was obtained. Identification of propolis compounds was performed by TLC and VLC methods. The HPLC spectrum obtained from the propolis ethanolic extract was compared with purified standard phenolic and flavonoid substances. The antibacterial effects of the ethanolic extract of purified propolis were evaluated on two strains, Staphylococcus aureus and Pseudomonas aeruginosa, and their MIC was determined by the microdilution assay. Results: TLC analysis of the ethanolic propolis extract confirmed several phenolic and flavonoid compounds in the extract, some of which were further confirmed by the HPLC technique. The minimum inhibitory concentration (MIC) for the standard Staphylococcus aureus (ATCC25923) and Pseudomonas aeruginosa (ATCC27853) strains was 2.5 mg/ml and 50 mg/ml, respectively. Conclusion: Bee propolis is a mixture of organic compounds with many beneficial effects, including the antibacterial activity emphasized in this investigation. It is proposed as a rich source of natural phenolic and flavonoid compounds for the design of new biological resources for hygienic and medical applications.
Keywords: propolis, Staphylococcus aureus, Pseudomonas aeruginosa, antibacterial
Procedia PDF Downloads 305
23944 Non-Parametric Regression over Its Parametric Counterparts with Large Sample Size
Authors: Jude Opara, Esemokumo Perewarebo Akpos
Abstract:
This paper compares non-parametric linear regression with its parametric counterpart for a large sample size. A data set of anthropometric measurements of primary school pupils was used for the analysis; 50 pupils were randomly selected for the study. The data were subjected to a normality test using the Anderson-Darling technique, which showed that the residuals from the commonly used least squares method for fitting an equation to a set of (x, y) data points are not normally distributed (i.e., they do not follow a Gaussian distribution). The algorithms for non-parametric Theil's regression and its parametric OLS counterpart are stated in this paper. The R statistical programming environment was used for the analysis. The results show that there is a significant relationship between the response and the explanatory variable for both the parametric and the non-parametric regression. To compare the efficiency of the two methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) were used; the non-parametric regression performs better than its parametric counterpart, as indicated by its lower AIC and BIC values. The study recommends that future researchers examine a similar data set for the presence of outliers, expunge any that are detected, and re-analyze the data to compare results.
Keywords: Theil's regression, Bayesian information criterion, Akaike information criterion, OLS
Procedia PDF Downloads 305
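A minimal sketch (in Python, since the R scripts are not reproduced in the abstract) of the comparison described above: synthetic anthropometric-style data stand in for the 50-pupil data set, scipy's Theil-Sen estimator stands in for Theil's regression, and AIC/BIC are computed from a Gaussian residual approximation, which is an assumption rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50                                    # hypothetical sample, mirroring the 50 pupils
x = rng.uniform(110, 150, n)              # e.g. height (cm)
y = 0.45 * x - 25 + stats.t.rvs(df=3, size=n, random_state=1)  # heavy-tailed noise

def info_criteria(y, y_hat, k):
    """Gaussian-approximation AIC and BIC computed from residuals (assumed form)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

# Parametric OLS fit
slope, intercept, r, p, se = stats.linregress(x, y)
ols_pred = intercept + slope * x

# Non-parametric Theil(-Sen) fit
t_slope, t_intercept, lo, hi = stats.theilslopes(y, x)
theil_pred = t_intercept + t_slope * x

print("OLS   AIC/BIC:", info_criteria(y, ols_pred, k=2))
print("Theil AIC/BIC:", info_criteria(y, theil_pred, k=2))
```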
23943 Improving the Performance of Requisition Document Online System for Royal Thai Army by Using Time Series Model
Authors: D. Prangchumpol
Abstract:
This research presents a method for forecasting requisition document demand for military units, using exponential smoothing to analyze the data. The data used in the forecast are actual requisition document records of The Adjutant General Department. The forecasting results show that the Holt–Winters trend and seasonality method with α=0.1, β=0, γ=0 is appropriate and fits the document requisition data. In addition, an online requisition system was developed to improve the performance of document requisition in The Adjutant General Department and to ensure that its operation can be monitored.
Keywords: requisition, holt–winters, time series, royal thai army
Procedia PDF Downloads 308
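A compact sketch of additive Holt-Winters smoothing with the parameters reported above (α=0.1, β=0, γ=0); with β and γ fixed at zero, the initial trend and seasonal indices are simply held constant. The monthly demand figures are invented placeholders, not The Adjutant General Department's records.

```python
import numpy as np

def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    """Additive Holt-Winters; with beta = gamma = 0 the initial trend and
    seasonal indices stay fixed, matching the study's chosen parameters."""
    y = np.asarray(y, dtype=float)
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = y[:m] - level
    for t in range(len(y)):
        s = season[t % m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s
        level = new_level
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]

# Hypothetical monthly requisition counts (two years); forecast the next quarter.
demand = [120, 95, 130, 160, 150, 140, 170, 180, 165, 155, 175, 190,
          125, 100, 135, 165, 155, 145, 175, 185, 170, 160, 180, 195]
print(holt_winters_additive(demand, m=12, alpha=0.1, beta=0.0, gamma=0.0, horizon=3))
```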
23942 Geoelectric Survey for Groundwater Potential in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria
Authors: Ibrahim Mohammed, Suleiman Taofiq, Muhammad Naziru Yahya
Abstract:
Geoelectrical measurements using the Schlumberger Vertical Electrical Sounding (VES) method were carried out in Waziri Umaru Federal Polytechnic, Birnin Kebbi, Nigeria, with the aim of determining the groundwater potential in the area. Twelve (12) Vertical Electrical Sounding (VES) data sets were collected using a Terrameter (ABEM SAS 300c) and analyzed using computer software (IPI2win), which gives an automatic interpretation of the apparent resistivity. The interpreted VES data were used to characterize three to five geo-electric layers, from which the aquifer units were delineated. Data analysis indicated that a water-bearing formation exists in the third and fourth layers, with resistivity ranges of 312 to 767 Ωm and 9.51 to 681 Ωm, respectively. The thickness of the formation ranges from 14.7 to 41.8 m, while its depth ranges from 8.22 to 53.7 m. Based on the interpreted data, five (5) VES stations were recommended as the most viable locations for groundwater exploration in the study area: VES A4, A5, A6, B1, and B2. The VES results for the entire area indicated that the water-bearing formation occurs at a maximum depth of 53.7 m at the time of this survey.
Keywords: aquifer, depth, groundwater, resistivity, Schlumberger
Procedia PDF Downloads 166
23941 The Integration of Patient Health Record Generated from Wearable and Internet of Things Devices into Health Information Exchanges
Authors: Dalvin D. Hill, Hector M. Castro Garcia
Abstract:
A growing number of individuals utilize wearable devices on a daily basis. The usage and functionality of these wearable devices vary from user to user. One popular usage of such devices is to track health-related activities that are typically stored in a device's memory or uploaded to an account in the cloud; based on the current trend, the data accumulated from the wearable device are stored in a standalone location. In many of these cases, this health-related data is not a factor when considering the holistic view of a user's health lifestyle or record. The health-related data generated from wearable and Internet of Things (IoT) devices can serve as empirical information for a medical provider, as the standalone data can add value to the holistic health record of a patient. This paper proposes a solution to incorporate the data gathered from these wearable and IoT devices into a patient's Personal Health Record (PHR) stored within the confines of a Health Information Exchange (HIE).
Keywords: electronic health record, health information exchanges, internet of things, personal health records, wearable devices, wearables
Procedia PDF Downloads 128
23940 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented to solve the resulting problem while preserving the solution matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can make exact detection of outliers in the case of no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers, and the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered "clean" data from the proposed method can give much better parameter estimation compared with that based on the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
Procedia PDF Downloads 307
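The authors' fast structured solver is not reproduced here, but the low-rank-plus-sparse decomposition the abstract describes can be prototyped with an off-the-shelf convex solver. The sketch below is a stand-in using cvxpy's nuclear-norm/l1 formulation on a small synthetic data matrix; the regularization weight 1/sqrt(n) is a common robust-PCA default and an assumption, not the paper's setting.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 30, 2
D = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank "system" data
outliers = np.zeros((n, n))
idx = rng.choice(n * n, size=10, replace=False)
outliers.flat[idx] = rng.uniform(5, 10, size=10)                # sparse gross outliers
D_obs = D + outliers

L = cp.Variable((n, n))
S = cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)                      # assumed regularization weight
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S)),
                  [L + S == D_obs])
prob.solve()

detected = np.abs(S.value) > 1e-3           # nonzero sparse entries flag outliers
print("true outlier cells recovered:", int(np.sum(detected.flat[idx])), "of", len(idx))
```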
23939 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components
Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler
Abstract:
Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut expenses in preventive maintenance by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, centralization of sensor data monitoring can be achieved by using this approach. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to build a system for predictive maintenance with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms, and the reference architecture applies to many use cases, such as gas station maintenance, elevator maintenance, and many more.
Keywords: case study, internet of things, predictive maintenance, reference architecture
Procedia PDF Downloads 252
23938 Inhibition of Sea Urchin and Starfish Embryonic Development by Hexane Extracts from Five Philippine Marine Sponges
Authors: Chona Gelani, Mylene Uy, Keisuke Yasuda, Emi Ohta, Shinji Ohta
Abstract:
The marine environment is undoubtedly a rich source of diverse organisms that possess bioactive secondary metabolites with important pharmacological activities. Marine sponges have long contributed a wide array of compounds of biomedical and pharmaceutical importance. This study is an attempt to contribute to the growing and advancing field of marine natural products research. It aims to evaluate the cytotoxicity of the hexane extracts (H) from the Philippine marine sponges Rhabdastrella globostellata (Rg), Callyspongia sp. (Calsp), Callyspongia aerizusa (Ca), Carteriospongia sp. (Carsp), and Cinachyrella sp. (Cisp) using the eggs of the starfish Asterina pectinifera and the sea urchin Hemicentrotus pulcherrimus. Specifically, the cytotoxicity of each marine sponge hexane extract was determined through its inhibition of starfish and sea urchin embryonic development. After 24 hours, CarspH and RgH inhibited early gastrulation of the sea urchin at minimum concentrations of 15.63 and 31.25 μg/mL, respectively. CalspH inhibited the early gastrulation of both the sea urchin and the starfish at 125 μg/mL, whereas CaH halted the morula of the sea urchin and early gastrulation of the starfish at 250 μg/mL. CispH exhibited relatively weak inhibitory activity on starfish embryogenesis but inhibited the early gastrulation of the sea urchin at 250 μg/mL. The results obtained from this study were used as a basis for the separation, isolation, and purification of the component(s) of the hexane extracts from the five Philippine marine sponges.
Keywords: embryonic development, marine sponge cytotoxicity, Philippine marine sponges, sea urchin and starfish embryogenesis
Procedia PDF Downloads 282
23937 Comparing Groundwater Fluoride Level with WHO Guidelines and Classifying At-Risk Age Groups; Based on Health Risk Assessment
Authors: Samaneh Abolli, Kamyar Yaghmaeian, Ali Arab Aradani, Mahmood Alimohammadi
Abstract:
The main route of fluoride uptake is drinking water. Fluoride absorption in the acceptable range (0.5-1.5 mg L⁻¹) is beneficial for the body, but excessive consumption can have irreversible health effects. To compare fluoride concentrations with the WHO guidelines, 112 water samples were taken from groundwater aquifers in 22 villages of Garmsar County, in the central part of Iran, during 2018 to 2019. Fluoride concentration was measured by the SPADNS method, and its non-carcinogenic impacts were calculated using the estimated daily intake (EDI) and hazard quotient (HQ). The statistical population was divided into four categories: infants, children, teenagers, and adults. Linear regression and Spearman rank correlation coefficient tests were used to investigate the relationship between well depth and fluoride concentration in the water samples. The annual mean fluoride concentrations in 2018 and 2019 were 0.75 and 0.64 mg L⁻¹, and the mean concentrations in the cold and hot seasons of the studied years were 0.709 and 0.689 mg L⁻¹, respectively. The amount of fluoride in 27% of the samples in both years was less than the acceptable minimum (0.5 mg L⁻¹). Also, 11% of the samples in 2018 (6 samples) had fluoride levels higher than 1.5 mg L⁻¹. The HQ showed that children were the most vulnerable group, followed by teenagers and adults. Statistical tests showed an inverse and significant correlation (R² = 0.02, p < 0.0001) between well depth and fluoride content. The margin between beneficial and harmful fluoride levels is very narrow and requires extensive study.
Keywords: fluoride, groundwater, health risk assessment, hazard quotient, Garmsar
Procedia PDF Downloads 70
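A worked illustration of the EDI/HQ calculation named above, using the reported 2018 annual mean concentration; the intake rates, body weights, and reference dose are generic risk-assessment defaults, so they are assumptions rather than the study's exposure parameters, and the resulting ranking of age groups depends entirely on those assumed values.

```python
# Hedged illustration of the non-carcinogenic risk metrics used in the study.
C_FLUORIDE = 0.75          # mg/L, annual mean reported for 2018
RFD = 0.06                 # mg/kg/day, commonly cited fluoride reference dose (assumed)

# Assumed daily water intake (L/day) and body weight (kg) for each age group
groups = {
    "infant":    {"IR": 0.8, "BW": 10},
    "children":  {"IR": 1.5, "BW": 25},
    "teenagers": {"IR": 2.0, "BW": 50},
    "adults":    {"IR": 2.5, "BW": 70},
}

for name, g in groups.items():
    edi = C_FLUORIDE * g["IR"] / g["BW"]   # estimated daily intake, mg/kg/day
    hq = edi / RFD                         # hazard quotient; HQ > 1 flags potential concern
    # Which group ranks highest is driven by the assumed IR/BW values above.
    print(f"{name:9s}  EDI = {edi:.4f} mg/kg/day  HQ = {hq:.2f}")
```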
23936 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction
Authors: Yan Zhang
Abstract:
Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related task. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly; the latter usually uses radio communication inherent to the network, and the data are stored in a staging data node before they can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contain inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights, including the tools for data ingestion, streaming data processing, and machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications, and (2) it takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
Keywords: Internet of Things, machine learning, predictive maintenance, streaming data
Procedia PDF Downloads 386
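A compressed sketch of the batch failure-prediction step described in the third contribution: remaining-useful-life labels are derived from run-to-failure cycle counts, and a group-aware split keeps records from the same engine out of both training and test folds. The tiny inline table and column names only mimic, and are not taken from, the public C-MAPSS files.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical run-to-failure records: one row per engine per cycle.
df = pd.DataFrame({
    "unit":    [1]*5 + [2]*4 + [3]*6,
    "cycle":   list(range(1, 6)) + list(range(1, 5)) + list(range(1, 7)),
    "sensor1": [641, 642, 644, 647, 651, 640, 643, 648, 652, 639, 640, 642, 645, 649, 654],
    "sensor2": [1585, 1588, 1592, 1597, 1603, 1584, 1590, 1598, 1605,
                1583, 1585, 1589, 1594, 1600, 1608],
})

# Label: will this engine fail within the next HORIZON cycles?
HORIZON = 2
max_cycle = df.groupby("unit")["cycle"].transform("max")
df["rul"] = max_cycle - df["cycle"]                 # remaining useful life in cycles
df["fail_soon"] = (df["rul"] <= HORIZON).astype(int)

# Inter-dependent records: evaluate with engine-wise (group) cross-validation.
X, y, groups = df[["sensor1", "sensor2"]], df["fail_soon"], df["unit"]
clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=3))
print("group-wise CV accuracy:", scores.round(2))
```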
23935 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor their roads' conditions to minimize road maintenance costs and maintain public safety and rideability quality. Existing methods for carrying out road condition surveys involve manual observation of roads using standard survey forms, done by qualified road condition surveyors or engineers either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming, infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys while using low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), thereby providing a more cost-effective and efficient road condition monitoring approach that can also provide real-time road conditions. Additionally, data fusion is employed to enhance the road condition assessment with data from vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 124
23934 Enhancing Student Learning Outcomes Using Engineering Design Process: Case Study in Physics Course
Authors: Thien Van Ngo
Abstract:
The engineering design process is a systematic approach to solving problems. It involves identifying a problem, brainstorming solutions, prototyping and testing solutions, and evaluating the results. The engineering design process can be used to teach students how to solve problems in a creative and innovative way. The aim of this study was to investigate the effectiveness of using the engineering design process to enhance student learning outcomes in a physics course. A mixed research method was used: the quantitative data were collected using a pretest-posttest control group design, and the qualitative data were collected using semi-structured interviews. The sample was 150 first-year students in the Department of Mechanical Engineering Technology at Cao Thang Technical College in Vietnam in the 2022-2023 school year. The pretest was administered to both groups at the beginning of the study, and the posttest was administered to both groups at the end of the study. The semi-structured interviews were conducted after the posttest with a sample of eight students in the experimental group. The quantitative data were analyzed using independent-sample t-tests, and the qualitative data were analyzed using thematic analysis. The quantitative results showed that students in the experimental group, who were taught using the engineering design process, had significantly higher post-test scores on physics problem-solving than students in the control group, who were taught using the conventional method. The qualitative results showed that students in the experimental group were more motivated and engaged in the learning process than students in the control group; they also reported that they found the engineering design process to be a more effective way of learning physics. The findings of this study suggest that the engineering design process can be an effective way of enhancing student learning outcomes in physics courses: it engages students in the learning process and helps them to develop problem-solving skills.
Keywords: engineering design process, problem-solving, learning outcome of physics, students' physics competencies, deep learning
Procedia PDF Downloads 65
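The headline group comparison reduces to an independent-samples t-test on post-test scores; the sketch below uses invented score vectors rather than the study's data, and Welch's unequal-variance variant, which is a defensible default rather than the authors' documented setting.

```python
from scipy import stats

# Hypothetical post-test physics problem-solving scores (out of 100)
experimental = [78, 85, 90, 72, 88, 81, 79, 93, 84, 77]
control      = [70, 66, 74, 68, 72, 75, 63, 71, 69, 73]

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Post-test scores differ significantly between the two groups.")
```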
23933 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank
Authors: Jalal Haghighat Monfared, Zahra Akbari
Abstract:
Today, business executives need useful information to make better decisions. Banks have also been using information tools so that they can direct the decision-making process toward their desired goals by rapidly extracting information from data sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. Relevant statistical inference was used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that it is necessary for Mellat Bank to pay more attention to choosing business intelligence systems with high flexibility, particularly the ability to produce custom-formatted reports. The quality of data in the business intelligence systems showed the next strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of the data (internal or external), the type of data (quantitative or qualitative), the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
Keywords: business intelligence, business intelligence capability, decision making, decision quality
Procedia PDF Downloads 112
23932 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel
Abstract:
Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, requiring construction companies and contractors to periodically conduct soil testing. The focus of this study is the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The resulting GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to solve by other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers.
Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-value, bearing capacity
Procedia PDF Downloads 74
23931 Using TRACE and SNAP Codes to Establish the Model of Maanshan PWR for SBO Accident
Authors: B. R. Shen, J. R. Wang, J. H. Yang, S. W. Chen, C. Shih, Y. Chiang, Y. F. Chang, Y. H. Huang
Abstract:
In this research, the TRACE code with the interface code SNAP was used to simulate and analyze the SBO (station blackout) accident in the Maanshan PWR (pressurized water reactor) nuclear power plant (NPP). There are four main steps in this research. First, the SBO accident data of Maanshan NPP were collected. Second, the TRACE/SNAP model of Maanshan NPP was established using these data. Third, this TRACE/SNAP model was used to perform the simulation and analysis of the SBO accident. Finally, the simulation and analysis of the SBO with mitigation equipment was performed. The analysis results of TRACE are consistent with the data of Maanshan NPP. According to the TRACE predictions, the mitigation equipment of Maanshan can maintain the safety of the plant during an SBO.
Keywords: pressurized water reactor (PWR), TRACE, station blackout (SBO), Maanshan
Procedia PDF Downloads 194
23930 A Comparative and Doctrinal Analysis towards the Investigation of a Right to Be Forgotten in Hong Kong
Authors: Jojo Y. C. Mo
Abstract:
Memories are good. They remind us of people, places and experiences that we cherish. But memories cannot be changed, and there may well be memories that we do not want to remember. This is particularly true in relation to information which causes us embarrassment and humiliation, or simply because it is private; we all want to erase or delete such information. This desire to delete was recently recognised by the Court of Justice of the European Union in the 2014 case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, in which the court ordered Google to remove links to some information about the complainant which he wished to be removed. This so-called 'right to be forgotten' received serious attention and, significantly, the European Council and the European Parliament enacted the General Data Protection Regulation (GDPR) to provide a more structured and normative framework for implementation of the right to be forgotten across the EU. This development in data protection laws will, undoubtedly, have a significant impact on companies and corporations not just within the EU but outside as well. Hong Kong, being one of the world's leading financial and commercial centers as well as one of the first jurisdictions in Asia to implement a comprehensive piece of data protection legislation, is therefore a jurisdiction worth looking into. This article aims to investigate the following: a) whether there is a right to be forgotten under the existing Hong Kong data protection legislation, and b) if not, whether such a provision is necessary and why. The article utilises a comparative methodology based on a study of primary and secondary resources, including scholarly articles, government and law commission reports and working papers, and relevant international treaties, constitutional documents, case law and legislation. The author primarily engages literature and case-law review as well as comparative and doctrinal analyses. The completion of this article will provide privacy researchers with more concrete principles and data to conduct further research on privacy and data protection in Hong Kong and internationally, and will provide a basis for policy makers in assessing the rationale and need for a right to be forgotten in Hong Kong.
Keywords: privacy, right to be forgotten, data protection, Hong Kong
Procedia PDF Downloads 190
23929 Damage Assessment Based on Full-Polarimetric Decompositions in the 2017 Colombia Landslide
Authors: Hyeongju Jeon, Yonghyun Kim, Yongil Kim
Abstract:
Synthetic Aperture Radar (SAR) is an effective tool for assessing damage induced by disasters due to its all-weather and night/day acquisition capability. In this paper, the 2017 Colombia landslide was observed using full-polarimetric ALOS/PALSAR-2 data. Polarimetric decompositions, including the Freeman-Durden decomposition and the Cloude decomposition, are utilized to analyze the changes in scattering mechanisms before and after the landslide. These analyses are used to detect the areas damaged by the landslide. Experimental results validate the efficiency of the full-polarimetric SAR data, since the damaged areas can be well discriminated. Thus, we conclude that the proposed method using full-polarimetric data has great potential for damage assessment of landslides.
Keywords: Synthetic Aperture Radar (SAR), polarimetric decomposition, damage assessment, landslide
Procedia PDF Downloads 391
23928 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-nearest neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracy, and that a simple algorithm like linear regression can perform similarly to more sophisticated models while taking less time and fewer resources to implement.
Keywords: finance, machine learning, opening price, stock market
Procedia PDF Downloads 191
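A minimal sketch of the lagged-feature linear baseline mentioned above: the closing-price series is synthetic, the features are simply the previous five closes, and 'testing error' is read here as mean absolute percentage error on a chronological hold-out, which is one plausible interpretation rather than the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
prices = 150 + np.cumsum(rng.normal(0.2, 2.0, 1200))   # synthetic daily closes (~5 years)

LAGS = 5                                                # predict the next close from the last 5
X = np.array([prices[i:i + LAGS] for i in range(len(prices) - LAGS)])
y = prices[LAGS:]

split = int(0.8 * len(X))                               # chronological train/test split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])

mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"test MAPE: {mape:.2f}%")
```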
23927 Supervised Learning for Cyber Threat Intelligence
Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk
Abstract:
The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. The main problem is threat intelligence that is inaccurate, incomplete, outdated, and of little value. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations evaluate the overall accuracy, precision, recall, F1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms.
Keywords: threat information sharing, supervised learning, data classification, performance evaluation
Procedia PDF Downloads 150
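The evaluation the abstract lists (accuracy, precision, recall, F1-score, support) maps directly onto a standard classification report; the sketch below runs one on a synthetic stand-in dataset with invented threat-class names, since the authors' features and classifier are not specified here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in feature matrix with three hypothetical threat classes.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-class precision, recall, F1-score, and support, plus overall accuracy.
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["phishing", "malware", "benign"]))
```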
23926 Thermal Performance of Plate-Fin Heat Sink with Lateral Perforation
Authors: Sakkarin Chingulpitak, Somchai Wongwises
Abstract:
Over the past several decades, the development of electronic devices has led to higher performance. Therefore, an electronic cooling system is important for the electronic device. The heat sink, which is a part of the electronic cooling system, is continuously studied in the research field to enhance heat transfer. To the authors' best knowledge, there have been only a few articles which reported the thermal performance of plate-fin heat sinks with perforation. This research aims to study the flow and heat transfer characteristics of a solid-fin heat sink (SFHS) and laterally perforated plate-fin heat sinks (LAP-PFHS). The SFHS and LAP-PFHSs are investigated with the same fin dimensions. The LAP-PFHSs have 27 perforations and two different circular perforation diameters (3 mm and 5 mm). The experimental study is conducted under various Reynolds numbers from 900 to 2,000 and a heat input of 50 W. The experimental results show that the LAP-PFHS with a perforation diameter of 5 mm gives the minimum thermal resistance, about 25% lower than the SFHS. The thermal performance factor, which takes into account the ratio of Nusselt numbers and the ratio of friction factors, is used to find suitable design parameters. The experimental results show that the LAP-PFHS with a perforation diameter of 3 mm provides a thermal performance about 15% greater than the SFHS. In addition, a simulation study is presented to investigate the effect of the air flow behavior inside the perforations on the thermal performance of the LAP-PFHS.
Keywords: heat sink, parallel flow, circular perforation, non-bypass flow
Procedia PDF Downloads 148
23925 Enhancement of Mulberry Leaf Yield and Water Productivity in Eastern Dry Zone of Karnataka, India
Authors: Narayanappa Devakumar, Chengalappa Seenappa
Abstract:
Field experiments were conducted during Rabi 2013 and summer 2014 at the College of Sericulture, Chintamani, Chickaballapur district, Karnataka, India, to find out the response of mulberry to different methods and levels of irrigation and to mulching. The results showed that the leaf yield and water productivity of mulberry were significantly influenced by the different methods and levels of irrigation and by mulching. Subsurface drip with a lower level of irrigation at 0.8 CPE (Cumulative Pan Evaporation) recorded higher leaf yield and water productivity (42,857 kg ha⁻¹ yr⁻¹ and 364.41 kg ha-cm⁻¹) than surface drip with a higher level of irrigation at 1.0 CPE (38,809 kg ha⁻¹ yr⁻¹ and 264.10 kg ha-cm⁻¹) and micro spray jet (39,931 kg ha⁻¹ yr⁻¹ and 271.83 kg ha-cm⁻¹). Further, subsurface drip recorded the minimum water used to produce one kg of leaf and to earn one rupee of profit (283 L and 113 L) compared to the surface drip (390 L and 156 L) and micro spray jet (379 L and 152 L) irrigation methods. Mulberry leaf yield increased and water productivity decreased with increased levels of irrigation. These results indicated that irrigation of mulberry with subsurface drip increased leaf yield and water productivity while saving 20% of irrigation water compared with the surface drip and micro spray jet irrigation methods in the Eastern Dry Zone (EDZ) of Karnataka.
Keywords: cumulative pan evaporation, mulberry, subsurface drip irrigation, water productivity
Procedia PDF Downloads 281
23924 Methodologies, Findings, Discussion, and Limitations in Global, Multi-Lingual Research: We Are All Alone - Chinese Internet Drama
Authors: Patricia Portugal Marques de Carvalho Lourenco
Abstract:
A three-phase, multi-lingual methodological path was designed, constructed, and carried out using the 2020 Chinese internet drama series We Are All Alone as a case study. Phase one, the backbone of the research, comprised secondary data analysis and provided the structure on which the next two phases would be built. It incorporated a Google Scholar and a Baidu Index analysis, the Star Network Influence Index, the top two drama reviews on Mydramalist.com, an article written about the drama, and scrutiny of China-related blogs and websites. Phase two was field research carried out across Latin Europe, and phase three focused on social media, taking into account that perceptions are conditioned by memory and the recall of past ideas. Overall, the research has shown the poor cultural expression of Chinese entertainment in Latin Europe and demonstrated the absence of Chinese content in French, Italian, Portuguese, and Spanish business-to-consumer retailers, a reflection of its low significance in Latin European markets and of the short life cycle of entertainment products in general: bubble-gum, disposable goods without a mid- to long-term effect on consumers' lives. The process of conducting comprehensive international research was complex and time-consuming, with data not always available in Mandarin, the researcher's linguistic limitations, limited knowledge of Chinese culture, and issues of cultural equivalence. Despite steps taken to minimize these issues, theoretical limitations relating to Latin Europe and China still occurred. Data accuracy was disputable; sampling and data collection/analysis methods were heterogeneous; and ascertaining the data requirements and the method of analysis needed to achieve construct equivalence was challenging and slow to operationalize. Secondary data were also often not readily available in Mandarin; yet, in spite of the array of limitations, the research was done, and results were produced.
Keywords: research methodologies, international research, primary data, secondary data, research limitations, online dramas, China, Latin Europe
Procedia PDF Downloads 68
23923 Prevalence, Awareness, and Risk Factors of Diabetes in Ahvaz: South West of Iran
Authors: Leila Yazdanpanah, Hajieh Shahbazian, Seyed Mahmoud Latifi, Armaghan Moravej Aleali, Saeed Ghanbari
Abstract:
Introduction: This study was designed to determine the prevalence of diabetes in people aged over 20 years in Ahvaz, Iran. Material and Methods: The study population was selected by cluster sampling. Fasting blood sugar (FBS) was assessed after a minimum of 8 hours of overnight fasting. A questionnaire covering age, sex, weight, height, blood pressure, waist circumference, and previous history of diabetes was completed for each participant. FBS ≥ 126 mg/dl and/or oral hypoglycemic treatment and/or insulin use was defined as diabetes, FBS = 100-125 mg/dl as impaired fasting glucose (IFG), and FBS < 100 mg/dl as normal. Results: The study population was 936 persons (47.2% male and 52.8% female). The mean age of the population was 42.2 ± 14 years. Diabetes was detected in 15.1% of the population. Only 57 cases (6.1%) were aware of their disease, and 9% had undiagnosed diabetes. Diabetes was detected in 14.5% of males (11.3% undiagnosed and 3.2% known diabetes) and in 11.7% of females (7% undiagnosed and 4.7% known diabetes). The prevalence of diabetes showed no significant difference between males and females (P=0.21), but undiagnosed diabetes was significantly higher in males (P=0.025). The prevalence of diabetes increased with age between 20 and 60 years but decreased after 60 years. Diabetes was related to age, waist circumference, systolic and diastolic blood pressure, TG level, and BMI in both sexes (P=0.0001). Conclusion: More than half of female and three-fourths of male diabetic patients are unaware of their disease in the south of Iran. Diabetes screening should be intensified in this population.
Keywords: diabetes, prevalence, risk factor, awareness
Procedia PDF Downloads 465
23922 Node Insertion in Coalescence Hidden-Variable Fractal Interpolation Surface
Authors: Srijanani Anurag Prasad
Abstract:
The Coalescence Hidden-variable Fractal Interpolation Surface (CHFIS) is built by combining interpolation data from an Iterated Function System (IFS). The interpolation data in a CHFIS comprise a row and/or column of uncertain values when a single point is inserted. Alternatively, a row and/or column of additional points is placed in the given interpolation data to demonstrate the node-added CHFIS. There are three techniques for inserting new points, corresponding to the row and/or column of nodes inserted, and each method is further classified into four types based on the values of the inserted nodes. As a result, numerous forms of node insertion can be found in a CHFIS.
Keywords: fractal, interpolation, iterated function system, coalescence, node insertion, knot insertion
Procedia PDF Downloads 101
23921 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this approach is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed using the clearance function of simulation software, where the minimum distances between the surfaces of bodies are simultaneously measured. A double-turret system is considered within the scope of this study. The 4-D configuration space of the double-turret system is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and the disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was realized as 2-D + 2-D, and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then verified using the obtained paths on the simulation model of the double-turret system.
Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds
Procedia PDF Downloads 140
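The 2-D + 2-D decomposition ends in a standard grid search, so a small occupancy-grid A* captures that final step; the grid below is an invented stand-in for one 2-D slice of the turret configuration space, and the 4-connected moves and Manhattan heuristic are illustrative choices.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = colliding configuration)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Invented 2-D slice of the configuration space; 1 marks colliding poses.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```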
23920 Optimizing the Efficiency of Measuring Instruments in Ouagadougou-Burkina Faso
Authors: Moses Emetere, Marvel Akinyemi, S. E. Sanni
Abstract:
At the moment, the AERONET and AMMA databases show a large volume of data loss. With only about 47% of the data set available to scientists, it is evident that accurate nowcasts or forecasts cannot be guaranteed. The calibration constants of most radiosondes or weather stations are not compatible with the atmospheric conditions of the West African climate. A dispersion model was developed to incorporate salient mathematical representations, such as a Unified number. The Unified number was derived to describe the turbulence of aerosol transport in the frictional layer of the lower atmosphere. A fourteen-year data set from the Multi-angle Imaging SpectroRadiometer (MISR) was tested using the dispersion model. A yearly estimation of the atmospheric constants over Ouagadougou using the model was obtained with about 87.5% accuracy. It further revealed that the average atmospheric constants for Ouagadougou-Niger are a₁ = 0.626 and a₂ = 0.7999, and the tuning constants are n₁ = 0.09835 and n₂ = 0.266. The yearly atmospheric constants also affirmed that the lower atmosphere of Ouagadougou is very dynamic. Hence, it is recommended that radiosonde and weather station manufacturers constantly review the atmospheric constants over a geographical location to enable about eighty percent data retrieval.
Keywords: aerosols retention, aerosols loading, statistics, analytical technique
Procedia PDF Downloads 315
23919 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model
Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon
Abstract:
The missing value problem is common in statistics and has been of interest for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE), and estimated bias (EB). In this study, we applied these methods to impute missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators
Procedia PDF Downloads 399
23918 Effect on the Performance of the Nano-Particulate Graphite Lubricant in the Turning of AISI 1040 Steel under Variable Machining Conditions
Authors: S. Srikiran, Dharmala Venkata Padmaja, P. N. L. Pavani, R. Pola Rao, K. Ramji
Abstract:
Technological advancements in the development of cutting tools and coolant/lubricant chemistry have enhanced the machining capabilities of hard materials under demanding machining conditions. The generation of high temperatures at the cutting zone during machining is one of the most important and pertinent problems, adversely affecting the tool life and surface finish of the machined components. Generally, cutting fluids and solid lubricants are used to overcome the problem of heat generation, but they do not address it effectively. With technological advancements in the field of tribology, nano-level particulate solid lubricants are nowadays being used in machining operations, especially in turning and grinding. The present investigation analyses the effect of using nano-particulate graphite powder as a lubricant in the turning of AISI 1040 steel under variable machining conditions and studies its effect on the cutting forces, tool temperature, and surface roughness of the machined component. Experiments revealed that cutting forces and tool temperature increase, resulting in a decrease in surface quality, as the size of the nano-particulate graphite lubricant decreases.
Keywords: solid lubricant, graphite, minimum quantity lubrication (MQL), nano-particles
Procedia PDF Downloads 270
23917 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models
Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand
Abstract:
Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities; biased algorithms can lead to misclassification and to reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same populations, creating a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health record (EHR) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models in terms of both metrics. These results suggest that linear models can sometimes be the optimal choice for imputing laboratory variables in terms of imputation efficiency and the uncertainty of predicted values.
Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias
Procedia PDF Downloads 85
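A minimal reproduction of the comparison logic, assuming synthetic correlated laboratory-style variables rather than the study's EHR data: values are masked at random, imputed, and scored with MAPE and RMSE. A BayesianRidge-based iterative imputer stands in for the 'linear' family and a random-forest-based one for the 'non-linear' family, which is an assumption about the model classes used.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(100, 15, n)                      # stand-ins for correlated lab variables
x2 = 0.6 * x1 + rng.normal(0, 5, n)
x3 = 0.3 * x1 + 0.4 * x2 + rng.normal(0, 5, n)
X_true = np.column_stack([x1, x2, x3])

X_miss = X_true.copy()
mask = rng.random(X_true.shape) < 0.15           # 15% of values missing completely at random
X_miss[mask] = np.nan

def score(X_imp):
    """MAPE and RMSE computed only over the masked (held-out) cells."""
    diff = X_imp[mask] - X_true[mask]
    mape = np.mean(np.abs(diff / X_true[mask])) * 100
    rmse = np.sqrt(np.mean(diff ** 2))
    return mape, rmse

for name, est in [("linear (BayesianRidge)", BayesianRidge()),
                  ("non-linear (RandomForest)", RandomForestRegressor(n_estimators=50, random_state=0))]:
    X_imp = IterativeImputer(estimator=est, random_state=0).fit_transform(X_miss)
    print(name, "MAPE = %.2f%%, RMSE = %.2f" % score(X_imp))
```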
23916 Hybrid Laser-Gas Metal Arc Welding of ASTM A106-B Steel Pipes
Authors: Masoud Mohammadpour, Nima Yazdian, Radovan Kovacevic
Abstract:
The oil and gas industries are vigorously looking for new ways to increase the efficiency of their pipeline construction. Among other approaches, implementing new welding methods for joining pipes can be a strong candidate in this regard. Hybrid Laser Arc Welding (HLAW), with its capabilities of high welding speed, deep penetration, and excellent gap-bridging ability, can be a possible alternative method for pipeline girth welding. This paper investigates the feasibility of applying HLAW to join ASTM A106-B, one of the most widely used piping materials for transporting high-temperature and high-pressure fluids and gases. The experiments were carried out on six-inch diameter pipes with a wall thickness of 10 mm. AWS ER70S-6 filler wire with a diameter of 1.2 mm was employed. For this welding procedure, characterization of the welded samples, including hardness, tensile, and Charpy V-notch testing, was performed, and the results are reported in this paper. In order to better understand the thermal history and the microstructural alterations caused by the welding heat cycle, a comprehensive Finite Element (FE) model was also developed. The results show that the Gas Metal Arc Welding (GMAW) procedure, which requires a minimum of 5 passes to complete the wall thickness, was reduced to a single pass by using the HLAW process, with a welding time of less than 15 s.
Keywords: finite element modeling, high-temperature service, hybrid laser/arc welding, welding pipes
Procedia PDF Downloads 208