Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38115

31065 A Cooperative Transmission Scheme Using Two Sources Based on OFDM System

Authors: Bit-Na Kwon, Dong-Hyun Ha, Hyoung-Kyu Song

Abstract:

In wireless communication, space-time block code (STBC), cyclic delay diversity (CDD), and space-time cyclic delay diversity (STCDD) are used as spatial diversity schemes and have been widely studied for reliable communication. When these schemes are used, the communication system obtains improved performance. However, the quality of the system degrades when the distance between the source and the destination is large. In this paper, a cooperative transmission scheme using two sources is proposed, which improves the performance of the wireless communication system.
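To make the diversity coding concrete, below is a minimal numpy sketch of Alamouti encoding, the classic two-antenna STBC named among the schemes above; the QPSK symbols and frame layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def alamouti_encode(symbols: np.ndarray) -> np.ndarray:
    """Map symbol pairs (s0, s1) onto two antennas over two time slots:
    antenna 1 sends [s0, -conj(s1)], antenna 2 sends [s1, conj(s0)]."""
    s = symbols.reshape(-1, 2)
    out = np.empty((2, s.shape[0] * 2), dtype=complex)
    out[0, 0::2], out[0, 1::2] = s[:, 0], -np.conj(s[:, 1])
    out[1, 0::2], out[1, 1::2] = s[:, 1], np.conj(s[:, 0])
    return out

qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # example QPSK symbols
tx = alamouti_encode(qpsk)
print(tx.shape)  # (2 antennas, 4 time slots)
```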

Keywords: OFDM, Cooperative communication, CDD, STBC, STCDD

Procedia PDF Downloads 458
31064 Evaluation of Methodologies for Measuring Harmonics and Inter-Harmonics in Photovoltaic Facilities

Authors: Anésio de Leles Ferreira Filho, Wesley Rodrigues de Oliveira, Jéssica Santoro Gonçalves, Jorge Andrés Cormane Angarita

Abstract:

The increase in electric power demand, in the face of environmental issues, has intensified the participation of renewable energy sources such as photovoltaics in the energy matrix of various countries. Due to their operational characteristics, these sources can generate time-varying harmonic and inter-harmonic distortions. For this reason, measurement methods based on traditional Fourier analysis, as proposed by IEC 61000-4-7, can provide inaccurate results. In view of these aspects, this work presents the results of a comparative evaluation between a methodology arising from the combination of the Prony method with the Kalman filter and another method based on the IEC 61000-4-30 and IEC 61000-4-7 standards. The study employed synthetic signals as well as data acquired through measurements in a 50 kWp photovoltaic installation.
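For context, the Fourier-based measurement that IEC 61000-4-7 builds on can be sketched as a DFT over a 10-cycle window of a 50 Hz signal, giving 5 Hz bins from which harmonic magnitudes are read; the signal below is synthetic, not measured PV data.

```python
import numpy as np

fs, f0 = 10_000, 50.0            # sampling rate and fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)    # 10 fundamental cycles -> 5 Hz bins
x = np.sin(2 * np.pi * f0 * t) + 0.08 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)   # single-sided amplitudes
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for h in (1, 3, 5):              # report a few harmonic orders
    k = np.argmin(np.abs(freqs - h * f0))
    print(f"h={h}: {spectrum[k]:.3f}")
```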

Keywords: harmonics, inter-harmonics, IEC 61000-4-7, parametric estimators, photovoltaic generation

Procedia PDF Downloads 469
31063 Risk Assessment on Construction Management with “Fuzzy Logic”

Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi

Abstract:

Construction projects are initiated in complicated, dynamic environments and, due to the close relationships between project parameters and the unknown outer environment, they face several uncertainties and risks. Success in time, cost, and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements, and poor definition of the extent and scope of the project. Projects facing such environments and uncertainties can be well managed by applying the concept of risk management throughout the project's life cycle. Although the concept of risk depends on the opinions and judgment of management, it also captures the risk of not achieving the project objectives; furthermore, a project's risk analysis considers the risk of developing inappropriate reactions. Since the evaluation and prioritization of construction projects is a difficult task, a network structure is considered an appropriate approach for analyzing complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, data are often inadequate under deterministic circumstances, and experts' opinions are usually mathematically vague, introduced in the form of linguistic variables instead of numerical expressions. Because fuzzy logic is used to express vagueness and uncertainty, formulating experts' opinions in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics. In this study, the risk parameters and factors in construction management are evaluated and prioritized with a fuzzy logic method combining three techniques: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
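As a rough illustration of the final ranking step, here is a minimal crisp TOPSIS sketch; the fuzzy arithmetic and the DEMATEL/ANP weighting of the actual method are omitted, and the decision matrix and weights are invented.

```python
import numpy as np

X = np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.]])  # risks x criteria
w = np.array([0.5, 0.3, 0.2])                             # criterion weights

V = w * X / np.linalg.norm(X, axis=0)       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # assuming all criteria are "benefit" type
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # rank risks by closeness to ideal
print(np.argsort(closeness)[::-1])          # indices, highest priority first
```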

Keywords: fuzzy logic, risk, prioritization, assessment

Procedia PDF Downloads 574
31062 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem

Abstract:

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, adopting a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Second, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance, and these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, a Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT). The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
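The two-stage pipeline (conditional mean first, then conditional variance of the residuals) can be sketched as follows, with standard ARIMA and GARCH models from statsmodels and the arch package standing in for the k-factor GARMA and G-GARCH processes, which are not available in common libraries; the price series is synthetic.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

prices = np.cumsum(np.random.default_rng(0).normal(size=500)) + 50.0

mean_fit = ARIMA(prices, order=(1, 1, 1)).fit()       # stage 1: conditional mean
resid = mean_fit.resid
var_fit = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")  # stage 2

print(mean_fit.forecast(steps=24))                     # 24-step-ahead mean path
print(var_fit.forecast(horizon=24).variance.iloc[-1])  # forecast variance path
```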

Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting

Procedia PDF Downloads 217
31061 The Effect of Different Cucumber (Cucumis sativus L.) Varieties on Growth and Development Time of Aphis gossypii Glover (Hemiptera: Aphididae)

Authors: Rochelyn Dona, Mohamed F. Nur, Serdar Satar

Abstract:

The biological response of Aphis gossypii Glover (Hemiptera: Aphididae) to seven cucumber varieties (Cucumis sativus L.), namely Kitir, Muhika, Ayda, Beit alfa, 14-F1, Ruzgar, and Ptk, was investigated under laboratory conditions at 24±1°C, 65±5% relative humidity (RH), and a photoperiod of 16:8 (L:D) hours. The results showed that the developmental time of A. gossypii differed significantly between varieties only at the first nymphal instar, ranging from 0.98 days on Ruzgar to 1.18 days on Kitir; the second nymphal stage ranged from 0.98 days on Beit alfa to 1.08 days on Muhika, the third from 0.94 days on Kitir to 1.16 days on 14-F1, and the last instar from 1.22 days on Ptk to 1.48 days on Kitir. The total development time ranged from 4.46 days on Beit alfa to 4.72 days on Kitir. The offspring number ranged from 60.42 aphids on Ayda to 83.72 aphids on Muhika; the significance of differences between varieties was assessed by one-way ANOVA (Tukey test). The lifetime of A. gossypii ranged from 19.10 days on Kitir to 27.64 days on Ptk. The results showed that cucumber cultivars affected the biological life of A. gossypii. Combining these findings with other IPM tactics can serve as the best strategy for controlling this pest on cucumber varieties in the greenhouse.
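The variety comparison rests on a one-way ANOVA followed by Tukey's HSD; a minimal scipy sketch of that test is given below, with invented development-time samples in place of the study's data.

```python
import numpy as np
from scipy import stats

# invented total development times (days) for three varieties
kitir = np.array([4.8, 4.7, 4.6, 4.9, 4.7])
beit  = np.array([4.4, 4.5, 4.3, 4.6, 4.5])
ayda  = np.array([4.6, 4.5, 4.7, 4.6, 4.4])

print(stats.f_oneway(kitir, beit, ayda))   # overall variety effect
print(stats.tukey_hsd(kitir, beit, ayda))  # pairwise comparisons
```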

Keywords: cucumber cultivars, fecundity, intrinsic rate, mortality, resistance

Procedia PDF Downloads 181
31060 Optimization of Ultrasound Assisted Extraction of Polysaccharides from Plant Waste Materials: Selected Model Material is Hazelnut Skin

Authors: T. Yılmaz, Ş. Tavman

Abstract:

In this study, the optimization of ultrasound assisted extraction (UAE) of hemicellulose-based polysaccharides from plant waste material was studied, with hazelnut skin as the selected material. The extraction variables were extraction time, amplitude, and application temperature. Optimum conditions were evaluated based on responses such as the amount of wet crude polysaccharide, total carbohydrate content, and dried sample. Pretreated hazelnut skin powders were used for the experiments: 10 g samples were suspended in 100 ml of water in a jacketed vessel with additional magnetic stirring, and the mixture was sonicated with an immersed ultrasonic probe processor. After the extraction procedures, the ethanol-soluble and insoluble fractions were separated for further examination. The experimental data were analyzed by analysis of variance (ANOVA), and second-order polynomial models were developed using multiple regression analysis. The individual and interactive effects of the variables were evaluated using a Box-Behnken design. The models developed from the experimental design were predictive and fit the experimental data well, with high correlation coefficients (R² greater than 0.95). According to Fourier transform infrared (FTIR) spectroscopy results and the literature, the polysaccharides extracted from hazelnut skin are assumed to be pectic polysaccharides; no further change was observed between spectra obtained at different sonication times. Application of UAE at the optimized conditions has an important effect on the extraction of hemicellulose from plant material by promoting partial hydrolysis, which breaks the bonds with other components in the plant cell wall. This effect can be attributed to the varied intensity of microjets and microstreaming under different sonication conditions.
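A minimal sketch of the modeling step follows: fitting the second-order polynomial (response-surface) model over a three-factor Box-Behnken design in coded units; the yield values are invented, not the study's measurements.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Box-Behnken design for 3 factors (time, amplitude, temperature), coded -1/0/+1
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0]])
y = np.array([1.9, 2.4, 2.2, 2.8, 1.8, 2.3, 2.1, 2.6,
              2.0, 2.5, 2.2, 2.7, 2.9])   # invented crude-polysaccharide yields (g)

quad = PolynomialFeatures(degree=2, include_bias=False)  # linear + interaction + squared terms
model = LinearRegression().fit(quad.fit_transform(X), y)
print(model.score(quad.transform(X), y))                 # R² of the fitted quadratic surface
```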

Keywords: hazelnut skin, optimization, polysaccharide, ultrasound assisted extraction

Procedia PDF Downloads 318
31059 An ANN-Based Predictive Model for Diagnosis and Forecasting of Hypertension

Authors: Obe Olumide Olayinka, Victor Balanica, Eugen Neagoe

Abstract:

The effects of hypertension are often lethal; thus, its early detection and prevention are very important for everybody. In this paper, a neural network (NN) model was developed and trained on a dataset of hypertension causative parameters in order to forecast the likelihood of occurrence of hypertension in patients. Our research goal was to analyze the potential of the presented NN to predict, over a period of time, the risk of hypertension or the risk of developing this disease for patients who are or are not currently hypertensive. The results of the analysis for a given patient can support doctors in taking proactive measures to avert the onset of hypertension, such as recommendations regarding the patient's behavior in order to lower his or her hypertension risk. Moreover, the paper envisages a set of three example scenarios: determining the age at which the patient becomes hypertensive, i.e., the threshold hypertensive age; analyzing what happens if the threshold hypertensive age is fixed while the weight of the patient is varied; and setting the ideal weight for the patient and analyzing what happens to the threshold hypertensive age.
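A minimal sketch of this kind of supervised NN is shown below, using a small scikit-learn MLP in place of the paper's network; the four causative features and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))   # e.g. age, weight, family history, salt intake (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))    # held-out accuracy of the risk classifier
```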

Keywords: neural network, hypertension, data set, training set, supervised learning

Procedia PDF Downloads 378
31058 The Integration of Patient Health Record Generated from Wearable and Internet of Things Devices into Health Information Exchanges

Authors: Dalvin D. Hill, Hector M. Castro Garcia

Abstract:

A growing number of individuals utilize wearable devices on a daily basis. The usage and functionality of these wearable devices vary from user to user. One popular use of these devices is to track health-related activities, which are typically stored in the device's memory or uploaded to an account in the cloud; based on the current trend, the data accumulated from the wearable device are stored in a standalone location. In many of these cases, this health-related data is not considered in the holistic view of a user's health lifestyle or record. The health-related data generated by wearable and Internet of Things (IoT) devices can serve as empirical information for a medical provider, as this standalone data can add value to the holistic health record of a patient. This paper proposes a solution for incorporating the data gathered from these wearable and IoT devices into a patient's Personal Health Record (PHR) stored within the confines of a Health Information Exchange (HIE).
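One plausible shape for such an integration is an HL7 FHIR-style Observation, a resource format commonly used in health information exchanges; the sketch below is simplified and the identifiers are invented, as the paper does not prescribe a concrete schema.

```python
import json
from datetime import datetime, timezone

def wearable_to_observation(patient_id: str, heart_rate: int) -> dict:
    """Wrap a wearable heart-rate reading as a simplified FHIR Observation."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Heart rate"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": heart_rate, "unit": "beats/min"},
    }

print(json.dumps(wearable_to_observation("example-123", 72), indent=2))
```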

Keywords: electronic health record, health information exchanges, internet of things, personal health records, wearable devices, wearables

Procedia PDF Downloads 112
31057 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while preserving the solution matrix structure, and it can greatly reduce the computational cost compared with the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method achieves exact detection of outliers in the case of no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of the outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered “clean” data from the proposed method can give much better parameter estimates than those based on the raw data.
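The low-rank-plus-sparse idea can be sketched as convex principal component pursuit (nuclear norm plus l1) with cvxpy; this illustrates the formulation only, not the paper's faster structured solver, and the data matrix is synthetic.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))  # rank-2 "system" part
M[3, 7] += 10.0                                          # one injected outlier

L = cp.Variable(M.shape)
S = cp.Variable(M.shape)
lam = 1 / np.sqrt(max(M.shape))                          # standard PCP weight
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S)),
                  [L + S == M])
prob.solve()
print(np.unravel_index(np.abs(S.value).argmax(), S.shape))  # locates (3, 7)
```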

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 294
31056 Effect of Austenitizing Temperature, Soaking Time and Grain Size on Charpy Impact Toughness of Quenched and Tempered Steel

Authors: S. Gupta, R. Sarkar, S. Pathak, D. H. Kela, A. Pramanick, P. Talukdar

Abstract:

Low-alloy quenched and tempered steels are typically used in cast railway components such as knuckles, yokes, and couplers. Since these components experience extensive impact loading during their service life, adequate impact toughness of these grades needs to be ensured to avoid catastrophic failure of parts in service. Because of the general availability of Charpy V-notch test equipment, the Charpy test is the most common and economical means of evaluating the impact toughness of materials and is generally used in quality control applications. With this backdrop, an experiment was designed to evaluate the effect of austenitizing temperature, soaking time, and the resultant grain size on the Charpy impact toughness and the related fracture mechanisms in a quenched and tempered low-alloy steel, with the aim of optimizing the heat treatment parameters (i.e., austenitizing temperature and soaking time) with respect to impact toughness. In the first phase, samples were austenitized at different temperatures, viz. 760, 800, 840, 880, 920 and 960°C, followed by quenching and tempering at 600°C for 4 hours. In the next phase, samples were subjected to different soaking times (0, 2, 4 and 6 hours) at a fixed austenitizing temperature (980°C), followed by quenching and tempering at 600°C for 4 hours. The samples corresponding to the different test conditions were then subjected to instrumented Charpy tests at -40°C, and the absorbed energies were recorded. Subsequently, the microstructure and fracture surface of samples corresponding to the different test conditions were observed under a scanning electron microscope, and the corresponding grain sizes were measured. In the final stage, austenitizing temperature, soaking time, and measured grain sizes were correlated with the impact toughness and the fracture morphology and mechanism.

Keywords: heat treatment, grain size, microstructure, retained austenite, impact toughness

Procedia PDF Downloads 319
31055 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components

Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler

Abstract:

Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut expenses in preventive maintenance by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, a centralization of sensor data monitoring can be achieved with this approach. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to build a system for predictive maintenance execution with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms, and the architecture applies to many use cases, such as gas station maintenance, elevator maintenance, and many more.
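As a sketch of the telemetry-ingest edge of such an architecture, the azure-iot-device Python SDK can push a sensor reading to Azure IoT Hub as follows; the connection string is a placeholder and the payload schema is invented for illustration.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder credentials; a real deployment provisions these per device.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
reading = {"machineId": "pump-01", "vibration_mm_s": 4.2, "temperature_c": 61.5}
client.send_message(Message(json.dumps(reading)))  # lands in the cloud pipeline
client.shutdown()
```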

Keywords: case study, internet of things, predictive maintenance, reference architecture

Procedia PDF Downloads 230
31054 Electromyography Analysis during Walking and Seated Stepping in the Elderly

Authors: P. Y. Chiang, Y. H. Chen, Y. J. Lin, C. C. Chang, W. C. Hsu

Abstract:

The number of elderly people in the world population is increasing, and so is the rate of falls among them. Decreasing muscle strength and an increasing risk of falling are associated with the ageing process. Because the effects of seated stepping training on walking performance in the elderly remain unclear, the main purpose of this study was to perform an electromyography analysis of walking and seated stepping in the elderly. Four surface EMG electrodes were attached over the lower-limb muscles, namely the vastus lateralis (VL) and gastrocnemius (GT) of both sides. Before the test, the maximal voluntary contraction (MVC) of each muscle was obtained using manual muscle testing. The raw analog EMG signals were digitized at a sampling frequency of 2000 Hz, fully rectified, and the linear envelopes were calculated. The stepping motion cycle was separated into two phases by the stepping timing (ST) and the pedal return timing (PRT). ST refers to the time when the pedal marker reached its highest height, representing when the contra-lateral leg was about to release the pedal; PRT refers to the time when the pedal marker reached its lowest height, representing when the contra-lateral leg was about to step on the pedal. We assumed that ST played the same role as initial contact during walking, and PRT as toe-off. The period from ST to the next PRT is called the pushing phase (PP), during which the leg steps against resistance; we compare this phase with the stance phase in level walking. The period from PRT to the next ST is called the returning phase (RP), during which the leg encounters no resistance; we compare this phase with the swing phase in level walking. VL and GT muscular activation had similar patterns on both sides. The ability may transfer to that needed during the loading response, mid-stance, and terminal swing phases. Users needed to make more effort in stepping than in walking with similar timing; thus, strengthening the VL and GT may help to improve walking endurance and efficiency in the elderly.
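The EMG conditioning described (full-wave rectification, low-pass "linear envelope", normalization to MVC) can be sketched as follows; the filter order and cutoff are typical choices, not the study's stated settings, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                     # sampling rate used in the study
rng = np.random.default_rng(0)
emg = rng.normal(size=4000) * np.hanning(4000)  # synthetic raw EMG burst

rectified = np.abs(emg)                         # full-wave rectification
b, a = butter(4, 6.0 / (fs / 2), btype="low")   # 6 Hz low-pass (assumed cutoff)
envelope = filtfilt(b, a, rectified)            # linear envelope
mvc = envelope.max()                            # stand-in for the MVC trial peak
activation_pct = 100 * envelope / mvc           # %MVC activation profile
print(activation_pct.max())
```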

Keywords: elderly, electromyography, seated stepping, walking

Procedia PDF Downloads 209
31053 The Impact of the COVID-19 Pandemic on the Nursing Workforce in Slovakia

Authors: Lukas Kober, Vladimir Littva, Vladimir Siska

Abstract:

The pandemic has had a significant impact on our lives, and one of the most affected professions is nursing. Nurses are closest to the patient, spend the most time with them, support them, often replace their closest family members, and, of course, are part of the whole treatment process. Today's nurses have more competencies and roles than in the past. The healthcare system has reached a turning point, also in connection with the spreading Delta variant and the risk of the arrival of a third wave. The lack of nurses is a long-term problem, but it did not arise by itself. The reasons for the departure of nurses from the health care system are not only the increasing average age of nurses and midwives in Slovakia and their retirement: thousands of nurses are leaving due to poor working conditions, low wages, and poor management of individual workplaces. We need to keep older nurses in the health care system; otherwise, we risk their early departure. The pandemic only exacerbates this situation, and the associated risks, such as occupational infections or enormous overload and exhaustion, only accelerate the exit from the profession. According to current data from the register of nurses and midwives, 772 registrations were cancelled from January to September 2021, and 584 nurses requested the suspension of their registration due to non-performance of the profession; during the same period, only 240 new nurse graduates were registered. This significant disparity has existed for a long time: for the whole of 2020, 911 registrations were cancelled and 973 were suspended, while a total of 389 graduates were registered. Our system loses hundreds of graduates a year and loses experienced nurses with decades of experience who leave due to poor working conditions and wages or suffer from burnout. Compensation should also be awarded to the families of health professionals who have lost their lives due to their work and to COVID-19. Such measures can also motivate promising people interested in studying nursing, who can gradually replace the missing workforce. This work is supported by the KEGA project no. 015KU-4/2019.

Keywords: pandemic, COVID-19, nursing, nursing workforce, lack of nurses

Procedia PDF Downloads 198
31052 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation

Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen

Abstract:

Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their speed of travel), this paper considers the influence of at-risk groups' characteristics on their decision-making and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto summed times for common activity types on a timeline. Usually, timeline models estimate the required safe egress time (RSET) as a sum of four timespans: detection, alarm, pre-movement, and movement time, and compare this with the available safe egress time (ASET) to determine what influences the margin of safety; a sketch of this comparison is given below. This paper presents a cross-sectional study for identifying the most critical items for RSET and people's decision-making, with possibilities to include safety knowledge regarding people with physical or cognitive functional impairments. The results will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset for predicting the probabilistic behavioural patterns of at-risk groups and the components necessary for defining a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety for a safe escape route.
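A minimal sketch of that timeline comparison, with illustrative numbers only:

```python
def rset(detection: float, alarm: float, pre_movement: float, movement: float) -> float:
    """RSET as the sum of the four timeline spans (seconds)."""
    return detection + alarm + pre_movement + movement

aset = 300.0  # available safe egress time (s), illustrative
# A slower-moving occupant mainly stretches pre-movement and movement time:
for label, pre, move in [("baseline", 60.0, 90.0), ("reduced mobility", 120.0, 150.0)]:
    required = rset(detection=30.0, alarm=15.0, pre_movement=pre, movement=move)
    print(label, "margin of safety:", aset - required)
```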

Keywords: fire safety, evacuation, decision-making, at-risk groups

Procedia PDF Downloads 88
31051 Enhancing Student Learning Outcomes Using Engineering Design Process: Case Study in Physics Course

Authors: Thien Van Ngo

Abstract:

The engineering design process is a systematic approach to solving problems: it involves identifying a problem, brainstorming solutions, prototyping and testing solutions, and evaluating the results. The engineering design process can be used to teach students how to solve problems in a creative and innovative way. The aim of this study was to investigate the effectiveness of using the engineering design process to enhance student learning outcomes in a physics course. A mixed-methods design was used: quantitative data were collected using a pretest-posttest control group design, and qualitative data were collected using semi-structured interviews. The sample was 150 first-year students in the Department of Mechanical Engineering Technology at Cao Thang Technical College in Vietnam in the 2022-2023 school year. The pretest was administered to both groups at the beginning of the study, and the posttest at the end; the semi-structured interviews were conducted after the posttest with a sample of eight students from the experimental group. The quantitative data were analyzed using independent-samples t-tests, and the qualitative data using thematic analysis. The quantitative data showed that students in the experimental group, who were taught using the engineering design process, had significantly higher posttest scores on physics problem-solving than students in the control group, who were taught using the conventional method. The qualitative data showed that students in the experimental group were more motivated and engaged in the learning process than students in the control group; they also reported that they found the engineering design process to be a more effective way of learning physics. The findings of this study suggest that the engineering design process can be an effective way of enhancing student learning outcomes in physics courses: it engages students in the learning process and helps them to develop problem-solving skills.
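The core quantitative comparison is an independent-samples t-test on posttest scores; a minimal scipy sketch follows, with invented score vectors in place of the study's data.

```python
import numpy as np
from scipy import stats

experimental = np.array([7.8, 8.1, 7.5, 8.4, 7.9, 8.2])  # invented posttest scores
control      = np.array([6.9, 7.2, 7.0, 7.4, 6.8, 7.1])

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> groups differ significantly
```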

Keywords: engineering design process, problem-solving, learning outcome of physics, students’ physics competencies, deep learning

Procedia PDF Downloads 57
31050 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools to direct the decision-making process toward their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these factors and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. Relevant statistical inference was used for data analysis and hypothesis testing: in the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test; next, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis; finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of this research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom-formatted reports. After flexibility, the quality of the data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality, including the source of the data (internal or external), the type of data (quantitative or qualitative), the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
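Two of the statistical steps named, the Kolmogorov-Smirnov normality check and the Pearson correlation between capability and decision quality, can be sketched as follows; the scores are synthetic stand-ins for the 123 survey responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bi_capability = rng.normal(3.8, 0.6, size=123)   # e.g. 5-point Likert scale means
decision_quality = 0.7 * bi_capability + rng.normal(0, 0.4, size=123)

print(stats.kstest(stats.zscore(bi_capability), "norm"))  # normality check
print(stats.pearsonr(bi_capability, decision_quality))    # correlation hypothesis test
```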

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 101
31049 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel

Abstract:

Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, requiring construction companies and contractors to conduct it periodically. The focus of this study is the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and removing redundancy in the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topo-to-Raster interpolation process in GIS, estimated values are assigned to all locations based on the sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²). Various interpolation techniques were cross-validated to ensure the accuracy of the information. The resulting GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to address through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers.
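The interpolation step can be sketched with scipy's griddata standing in for the Topo-to-Raster tool: scattered borehole SPT N-values are turned into a continuous surface; the coordinates and N-values below are invented.

```python
import numpy as np
from scipy.interpolate import griddata

boreholes = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 60]])  # x, y (m)
spt_n     = np.array([12, 18, 15, 22, 17])          # N-values at boreholes

gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
surface = griddata(boreholes, spt_n, (gx, gy), method="cubic")
print(surface[25, 25])   # interpolated N-value near the grid centre
```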

Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-Value, bearing capacity

Procedia PDF Downloads 60
31048 Effects of Collection Time on Chemical Composition of Leaf Essential Oils of Hoslundia opposita

Authors: O. E. Ogunjinmi, N. O. Olawore, L. A. Usman, S. O. Ogunjinmi

Abstract:

An essential oil is a concentrated, hydrophobic liquid containing volatile aroma compounds produced by plants. It has been established that several factors affect the composition of plant oils, such as the texture of the soil, relative humidity, wind, and collection time. This study investigates the effect of collection time on the chemical composition of the leaf essential oil of Hoslundia opposita. Pulverized leaves (500 g) harvested in the morning (7 am) and afternoon (2 pm) of the same day were separately hydrodistilled using a Clevenger apparatus to obtain the essential oils. The leaf oils from the morning (7 am) and afternoon (2 pm) harvests yielded 0.54 and 0.65% w/w, respectively. Analysis of the morning leaf oil using gas chromatography (GC) and gas chromatography coupled with mass spectrometry (GC-MS) revealed the presence of twenty-three (23) compounds, which made up 81.8% of the total oil, while nineteen (19) compounds (93.2%) were identified in the afternoon leaf oil. The most abundant components of the morning (7 am) oil were p-cymene (28.7%), sabinene (7.1%), and 1,8-cineole (6.6%), while the major components of the afternoon (2 pm) oil were p-cymene (26.4%), thymol (15.3%), 1,8-cineole (15.0%), and γ-terpinene (10.4%). The composition patterns of the morning and afternoon harvests of Hoslundia opposita thus revealed significant qualitative and quantitative differences.

Keywords: essential oil, Hoslundia opposita, p-cymene, 1,8-cineole

Procedia PDF Downloads 377
31047 Using TRACE and SNAP Codes to Establish the Model of Maanshan PWR for SBO Accident

Authors: B. R. Shen, J. R. Wang, J. H. Yang, S. W. Chen, C. Shih, Y. Chiang, Y. F. Chang, Y. H. Huang

Abstract:

In this research, the TRACE code with the interface code SNAP was used to simulate and analyze the SBO (station blackout) accident that occurred in the Maanshan PWR (pressurized water reactor) nuclear power plant (NPP). There are four main steps in this research. First, the SBO accident data of Maanshan NPP were collected. Second, the TRACE/SNAP model of Maanshan NPP was established using these data. Third, this TRACE/SNAP model was used to perform the simulation and analysis of the SBO accident. Finally, the simulation and analysis of the SBO with mitigation equipment was performed. The analysis results of TRACE are consistent with the data of Maanshan NPP, and according to the TRACE predictions, the mitigation equipment can maintain the safety of Maanshan during the SBO.

Keywords: pressurized water reactor (PWR), TRACE, station blackout (SBO), Maanshan

Procedia PDF Downloads 177
31046 Variables, Annotation, and Metadata Schemas for Early Modern Greek

Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara

Abstract:

Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field usually deals with a closed data set that can only be expanded by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant Ancient Greek literature. However, for ‘dynamic’ periods in which the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). In this study, the principles behind the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.), are presented. Its design follows the fundamental principles of historical corpora. The selection of texts aims to create a representative and balanced corpus that gives insight into diachronic, diatopic, and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts that belong to four or five different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, customized to fit the characteristics of our text corpus. Regarding variables and their variants, we take as a point of departure the bundle of twenty-four features (or categories of features) for prose demotic texts of the 16th c. Tags are introduced bearing the variants [+old/archaic] or [+novel/vernacular]; further phenomena that are underway (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are also selected for tagging. The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Based on a relational management system interconnecting the data, the annotations, and their metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for researching observed language variation and change using advanced computational approaches.
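One possible concrete shape for such a token-level record, with a variant tag and sociolinguistic metadata attached, is sketched below; the field names, the variable name, and the metadata values are all invented for illustration and do not reproduce the EMoGReC scheme.

```python
# Hypothetical annotation record for one token in a vernacular text.
annotation = {
    "token": "<word form>",
    "lemma": "<dictionary form>",
    "variable": "future_periphrasis",     # invented variable name
    "variant_tag": "+novel/vernacular",   # vs. "+old/archaic"
    "metadata": {"genre": "chronicle", "region": "Crete", "century": 17},
}
print(annotation["variable"], "->", annotation["variant_tag"])
```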

Keywords: early modern Greek, variation and change, representative corpus, diachronic variables

Procedia PDF Downloads 49
31045 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B), and extraction time (C) on the efficiency of copper extraction were investigated, with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. All linear terms and the interaction term between the acid-to-sludge ratio and extraction time (BC) had a statistically significant influence on the efficiency of copper extraction under the tested conditions; the most significant effect was ascribed to the acid-to-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C), and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the 0.05 significance level. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method called the desirability (D) function, targeting maximum removal. The optimum conditions, extracting 99% of the copper, were found to be a sulfuric acid concentration of 0.9 M and a ratio of acid volume (mL) to sludge quantity (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
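A minimal sketch of a larger-is-better desirability optimization over a fitted removal model follows; the toy regression coefficients are invented, not the paper's fitted model.

```python
import numpy as np
from scipy.optimize import minimize

def removal(x):                      # x = [acid M, mL/g ratio, time min]
    a, r, t = x
    return 40 + 30 * a + 0.4 * r + 0.2 * t - 0.002 * r * t   # toy model

def desirability(x, lo=50.0, hi=100.0):   # 0 below lo, 1 at/above hi
    return np.clip((removal(x) - lo) / (hi - lo), 0, 1)

res = minimize(lambda x: -desirability(x), x0=[0.5, 50, 40],
               bounds=[(0.1, 0.9), (10, 100), (20, 80)])
print(res.x, removal(res.x))         # acid, ratio, time at maximum desirability
```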

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 184
31044 Determination of the Minimum Time and the Optimal Trajectory of a Moving Robot Using Picard's Method

Authors: Abbes Lounis, Kahina Louadj, Mohamed Aidene

Abstract:

This paper presents an optimal control problem applied to a robot: determining a command that makes it possible to reach a final state from a given initial state in minimum time. The approach followed to solve this optimization problem with constraints on the control starts by presenting the equations of motion of the dynamic system, then applies Pontryagin's maximum principle (PMP) to determine the optimal control, and finally uses Picard's successive approximation method combined with the shooting method to solve the resulting differential system.
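The shooting idea can be sketched on a toy two-point boundary value problem, x'' = -x with x(0) = 0 and x(1) = 1: guess the initial velocity, integrate, and root-find on the terminal miss distance. This illustrates the numerical machinery only, not the paper's robot model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def terminal_error(v0: float) -> float:
    """Integrate x'' = -x from the guessed initial velocity; return the miss at t = 1."""
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0, 1), [0.0, v0], rtol=1e-8)
    return sol.y[0, -1] - 1.0

v0 = brentq(terminal_error, 0.1, 5.0)          # bracket the root, then solve
print(v0, np.isclose(v0, 1.0 / np.sin(1.0)))   # analytic answer: 1/sin(1)
```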

Keywords: robotics, Pontryagin's Maximum Principle, PMP, Picard's method, shooting method, non-linear differential systems

Procedia PDF Downloads 243
31043 A Comparative and Doctrinal Analysis towards the Investigation of a Right to Be Forgotten in Hong Kong

Authors: Jojo Y. C. Mo

Abstract:

Memories are good: they remind us of people, places, and experiences that we cherish. But memories cannot be changed, and there may well be memories that we do not want to remember. This is particularly true of information that causes us embarrassment and humiliation, or simply because it is private; we all want to erase or delete such information. This desire to delete was recently recognised by the Court of Justice of the European Union in the 2014 case of Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, in which the court ordered Google to remove links to information about the complainant that he wished to be removed. This so-called ‘right to be forgotten’ received serious attention, and, significantly, the European Parliament and the Council enacted the General Data Protection Regulation (GDPR) to provide a more structured and normative framework for the implementation of the right to be forgotten across the EU. This development in data protection law will undoubtedly have a significant impact on companies and corporations within the EU and outside it as well. Hong Kong, being one of the world's leading financial and commercial centres as well as one of the first jurisdictions in Asia to implement a comprehensive piece of data protection legislation, is therefore a jurisdiction worth looking into. This article aims to investigate the following: (a) whether there is a right to be forgotten under the existing Hong Kong data protection legislation, and (b) if not, whether such a provision is necessary and why. The article utilises a comparative methodology based on a study of primary and secondary sources, including scholarly articles, government and law commission reports and working papers, and relevant international treaties, constitutional documents, case law, and legislation. The author primarily engages in literature and case-law review as well as comparative and doctrinal analyses. This article will provide privacy researchers with more concrete principles and data for further research on privacy and data protection in Hong Kong and internationally, and will provide a basis for policy makers in assessing the rationale and need for a right to be forgotten in Hong Kong.

Keywords: privacy, right to be forgotten, data protection, Hong Kong

Procedia PDF Downloads 171
31042 Damage Assessment Based on Full-Polarimetric Decompositions in the 2017 Colombia Landslide

Authors: Hyeongju Jeon, Yonghyun Kim, Yongil Kim

Abstract:

Synthetic Aperture Radar (SAR) is an effective tool for assessing damage induced by disasters due to its all-weather, day-and-night acquisition capability. In this paper, the 2017 Colombia landslide was observed using full-polarimetric ALOS/PALSAR-2 data. Polarimetric decompositions, including the Freeman-Durden decomposition and the Cloude decomposition, are utilized to analyze changes in the scattering mechanisms before and after the landslide, and these analyses are used to detect the areas damaged by the landslide. Experimental results validate the efficiency of full-polarimetric SAR data, since the damaged areas can be well discriminated. Thus, we can conclude that the proposed method using full-polarimetric data has great potential for damage assessment of landslides.
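As a hint of what such decompositions compute, below is a minimal sketch of the eigenvalue step behind the Cloude decomposition: scattering entropy from a 3×3 polarimetric coherency matrix; the matrix is a synthetic example, not PALSAR-2 data.

```python
import numpy as np

T = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 0.5]])                 # example coherency matrix
eigvals = np.linalg.eigvalsh(T)                 # real eigenvalues, ascending
p = eigvals / eigvals.sum()                     # pseudo-probabilities
entropy = -(p * np.log(p) / np.log(3)).sum()    # H in [0, 1]
print(entropy)   # higher H -> more random scattering (e.g. after a landslide)
```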

Keywords: Synthetic Aperture Radar (SAR), polarimetric decomposition, damage assessment, landslide

Procedia PDF Downloads 377
31041 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling

Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao

Abstract:

In recent years, the application of BIM (Building Information Modelling) to construction schedule planning has been the focus of more and more researchers. To assess whether a BIM-based construction schedule plan is reasonable, that is, whether the schedule can be completed on time, some researchers have introduced reliability theory for the evaluation. In this evaluation, the uncertain factors affecting the construction schedule plan are regarded as random variables, and their probability distributions are usually assumed to be normal, determined by two parameters, the mean and standard deviation, evaluated from statistical data. However, in practical engineering, most of the uncertain influencing factors are not normal random variables, so the evaluation results of the construction schedule plan will be unreasonable under the assumption that the random variables follow normal distributions. Therefore, in order to obtain a more reasonable evaluation result, it is necessary to describe the distributions of the random variables more comprehensively. For this purpose, the cubic normal distribution is introduced in this paper to describe the distribution of an arbitrary random variable; it is determined by the first four moments (mean, standard deviation, skewness, and kurtosis). In this paper, the BIM model is first built according to the design information of the structure, and the construction schedule plan is made based on BIM; then the cubic normal distribution is used to describe the distributions of the random variables, based on collected statistical data on the random factors influencing the construction schedule plan, so that the reliability analysis of the BIM-based construction schedule plan can be carried out more reasonably. Finally, more accurate evaluation results can be given, providing a reference for the implementation of the actual construction schedule plan. In the last part of this paper, the efficiency and accuracy of the proposed methodology for the reliability analysis of BIM-based construction schedule plans are demonstrated through a practical engineering case.
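The cubic normal idea, representing a variable as a third-order polynomial of a standard normal so that its first four moments are matched, lends itself to Monte Carlo schedule reliability, sketched below; the polynomial coefficients and planned duration are illustrative, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)

# X = a + b*U + c*U**2 + d*U**3; c and d inject skewness and kurtosis.
# Coefficients here are invented; in practice they are fitted to the
# first four moments estimated from the collected statistical data.
a, b, c, d = 10.0, 2.0, 0.4, 0.05
duration = a + b * u + c * u**2 + d * u**3   # one activity's duration (days)

planned = 16.0
print("P(on time) =", np.mean(duration <= planned))  # schedule reliability
```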

Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis

Procedia PDF Downloads 126
31040 Supervised Learning for Cyber Threat Intelligence

Authors: Jihen Bennaceur, Wissem Zouaghi, Ali Mabrouk

Abstract:

The major aim of cyber threat intelligence (CTI) is to provide sophisticated knowledge about cybersecurity threats to ensure internal and external safeguards against modern cyberattacks. Inaccurate, incomplete, outdated, and low-value threat intelligence is the main problem. Therefore, data analysis based on AI algorithms is one of the emergent solutions to overcome threat information-sharing issues. In this paper, we propose a supervised machine learning-based algorithm to improve threat information sharing by providing a sophisticated classification of cyber threats and data. Extensive simulations investigate the overall accuracy, precision, recall, f1-score, and support to validate the designed algorithm and to compare it with several supervised machine learning algorithms.
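A minimal sketch of the evaluation loop described, with a generic scikit-learn classifier and synthetic features standing in for real CTI records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled threat records (3 threat classes).
X, y = make_classification(n_samples=600, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # precision/recall/f1/support
```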

Keywords: threat information sharing, supervised learning, data classification, performance evaluation

Procedia PDF Downloads 134
31039 Multimodal Discourse, Logic of the Analysis of Transmedia Strategies

Authors: Bianca Suárez Puerta

Abstract:

Multimodal discourse refers to a method of studying the media continuum between reality, screens as devices, audience, author, and media as a production from the audience. For this study, we used the semantic differential, a method proposed in the sixties by Osgood, Suci, and Tannenbaum, which starts from the assumption that under each particular way of perceiving the world, in each singular idea, there is a common cultural meaning that organizes experiences. In relation to this shared symbolic dimension, the method has produced significant results, as it focuses on breaking down the meaning of certain significant acts into series of statements that place the subjects in front of particular concepts. In Colombia, in 2016, a tool was designed to measure the meaning of a multimodal production, especially the acts of sense of transmedia productions that managed to receive funds from the Ministry of ICT of Colombia, and also to analyze predictable patterns that can be found in calls and funds aimed at the production of culture in Colombia, in the context of the peace agreement, seen as a request for expressions from a hegemonic place seeking to impose a worldview.
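Scoring a semantic differential instrument reduces, at its simplest, to averaging bipolar-scale ratings per dimension (Osgood's evaluation/potency/activity); the sketch below uses invented ratings.

```python
import numpy as np

# rows = respondents, columns = 7-point bipolar scales for one concept
ratings = np.array([[6, 5, 2], [7, 4, 3], [5, 5, 2], [6, 6, 1]])
dimensions = ["evaluation", "potency", "activity"]
for name, mean in zip(dimensions, ratings.mean(axis=0)):
    print(f"{name}: {mean:.2f}")
```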

Keywords: semantic differential, semiotics, transmedia, critical analysis of discourse

Procedia PDF Downloads 196
31038 A Connected Structure of All-Optical Logic Gate “NOT-AND”

Authors: Roumaissa Derdour, Lebbal Mohamed Redha

Abstract:

We present a study of the transmission of an all-optical logic gate based on an improved structure connected to a triangular-lattice photonic crystal. The proposed logic gate consists of a photonic crystal nano-resonator formed by changing the size of the air holes. In addition to its simplicity, the response time is very short, and the designed nano-resonator increases the bit rate of the logic gate. The two-dimensional finite-difference time-domain (2D-FDTD) method is used to simulate the structure; the obtained transmission is about 98%, with negligible losses. The proposed photonic crystal AND logic gate could be widely used in future integrated optical microelectronics.
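To hint at the numerical method, here is a minimal one-dimensional FDTD sketch of the Yee update scheme that the 2D-FDTD generalizes; a toy dielectric slab stands in for the photonic-crystal cavity, and all grid parameters are illustrative.

```python
import numpy as np

n, steps = 400, 800
ez, hy = np.zeros(n), np.zeros(n)
eps = np.ones(n)
eps[180:220] = 9.0                          # high-index "defect" region

for t in range(steps):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])     # H-field update (Courant number 0.5)
    ez[1:] += 0.5 / eps[1:] * (hy[1:] - hy[:-1])  # E-field update
    ez[20] += np.exp(-((t - 60) / 20.0) ** 2)     # injected Gaussian pulse

print(float(np.abs(ez[180:220]).max()))     # field remaining in the slab
```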

Keywords: logic gates, photonic crystals, optical integrated circuits, resonant cavities

Procedia PDF Downloads 82
31037 Duplex Real-Time Loop-Mediated Isothermal Amplification Assay for Simultaneous Detection of Beef and Pork

Authors: Mi-Ju Kim, Hae-Yeong Kim

Abstract:

Product mislabeling and adulteration have raised increasing concern about processed meat products. Relatively inexpensive pork is adulterated into more expensive meats such as beef for economic benefit, and such pork-related food fraud incidents are of concern for economic, religious, and health reasons. In this study, a rapid on-site detection method using loop-mediated isothermal amplification (LAMP) was developed for the simultaneous identification of beef and pork. Specific LAMP primers for beef and pork were designed targeting the mitochondrial D-loop region. The LAMP reaction was performed at 65°C for 40 min. The specificity of each primer set for beef and pork was evaluated using DNA extracted from 13 animal species, including cattle and pig. The sensitivity of the duplex LAMP assay was examined by serial dilution of beef and pork DNA and with reference binary mixtures, and the assay was applied to processed meat products containing beef and pork for monitoring. Each set of primers amplified only the targeted species, with no cross-reactivity with other animal species. The limit of detection of the duplex real-time LAMP was 1 pg of DNA for each of beef and pork, and 1% pork in a beef-meat mixture. Commercial meat products that declared the presence of beef and/or pork on the label showed positive results for those species, and the method was successfully applied to the simultaneous detection of beef and pork in processed meat products. The optimized duplex LAMP assay can identify beef and pork simultaneously within 40 min, and the portable real-time fluorescence device used in this study is suitable for on-site detection of beef and pork in processed meat products. Thus, the developed assay is considered an efficient tool for monitoring meat products.

Keywords: beef, duplex real-time LAMP, meat identification, pork

Procedia PDF Downloads 207
31036 Comparing UV-based and O₃-Based AOPs for Removal of Emerging Contaminants from Food Processing Digestate Sludge

Authors: N. Moradi, C. M. Lopez-Vazquez, H. Garcia Hernandez, F. Rubio Rincon, D. Brdanovic, Mark van Loosdrecht

Abstract:

Advanced oxidation processes have been widely used for disinfection, for the removal of residual organic material, and for the removal of emerging contaminants from drinking water and wastewater. Yet the application of these technologies to sludge treatment processes has not gained enough attention, mostly due to the complexity of the sludge matrix. In this research, ozone and UV/H₂O₂ treatments were applied for the removal of emerging contaminants from a digestate supernatant. The removal of the following compounds was assessed: (i) salicylic acid (SA), a surrogate for non-steroidal anti-inflammatory drugs (NSAIDs), and (ii) sulfamethoxazole (SMX), sulfamethazine (SMN), and tetracycline (TCN), among the most frequently used human and animal antibiotics. The ozone treatment was carried out in a plexiglass bubble column reactor with a capacity of 2.7 L; the system was equipped with a stirrer and a gas diffuser. The UV and UV/H₂O₂ treatments were performed using a LED set-up (PearlLab beam device) with H₂O₂ dosing. In the ozone treatment evaluations, 95% of the three antibiotics was removed during the first 20 min of exposure, while 91% SA removal occurred after 8 hours of exposure. In the UV treatment evaluations, when the optimum dose of hydrogen peroxide was added (H₂O₂:COD molar ratio of 0.634), 36% of SA, 82% of TCN, and more than 90% of both SMX and SMN were removed after 8 hours of exposure. This study concluded that O₃ was more effective than UV/H₂O₂ in removing emerging contaminants from the digestate supernatant.

Keywords: digestate sludge, emerging contaminants, ozone, UV-AOP

Procedia PDF Downloads 89