Search results for: threshold models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7208

5708 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed. Emphasis has been given to coastal protection, coastal cage aquaculture, and harbors. The investigations were carried out in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport, and morphodynamics covering the entire Bohai Sea are set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were performed against measurements from moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. To enable cost-effective and scalable applications, a database management system was developed that enhances information processing and data evaluation and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture, and harbors are presented. Model simulations covering the most severe storms observed during the last decades were carried out, leading to an improved understanding of hydrodynamics and morphodynamics. The results helped identify coastal stretches subjected to higher energy levels and strengthened support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 181
5707 Determination of Power and Sample Size for the Zero-Inflated Negative Binomial Age-Dependent Death Rate Model (ZINBD): Regression Analysis of Mortality from Acquired Immune Deficiency Syndrome (AIDS)

Authors: Mohd Asrul Affendi Bin Abdullah

Abstract:

Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with this class of model. This paper shows how to compute power approximations for categorical covariates and then extends the approach to zero-inflated models. The Wald test was chosen to determine the power and sample size for the AIDS death rate model because it is frequently used, owing to its tractability and its natural role in several major recent contributions to sample size calculation. Power calculation can be conducted when covariates are used in modeling 'excess zero' data, and the approach accommodates categorical covariates. The analysis draws on an AIDS death rate study. The aim of this study is to determine the power for the sample size (N = 945) of the categorical death rate model, based on the parameter estimates in the simulation study.
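The Wald-test power calculation described above can be sketched with a standard normal approximation; the effect size, standard-error scaling, and target power below are illustrative assumptions, not the study's estimates:

```python
import numpy as np
from scipy.stats import norm

def wald_power(beta, se, alpha=0.05):
    """Approximate power of a two-sided Wald test of H0: beta = 0.

    Under the alternative, the Wald statistic is roughly N(beta/se, 1), so
    power ~ P(|Z| > z_{1-alpha/2}) with Z shifted by the effect size.
    """
    z_crit = norm.ppf(1 - alpha / 2)
    effect = abs(beta) / se
    return norm.cdf(effect - z_crit) + norm.cdf(-effect - z_crit)

def required_n(beta, se_per_sqrt_n, target_power=0.8, alpha=0.05):
    """Smallest n reaching the target power, assuming SE = se_per_sqrt_n / sqrt(n)."""
    n = 1
    while wald_power(beta, se_per_sqrt_n / np.sqrt(n), alpha) < target_power:
        n += 1
    return n
```

With a true coefficient of 0.3 and unit per-observation standard error, the approximation asks for roughly ninety observations to reach 80% power, which is the kind of calculation the abstract applies to the ZINBD coefficients.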

Keywords: power sample size, Wald test, standardized rate, ZINBDR

Procedia PDF Downloads 426
5706 Fast Fourier Transform-Based Steganalysis of Covert Communications over Streaming Media

Authors: Jinghui Peng, Shanyu Tang, Jia Li

Abstract:

Steganalysis seeks to detect the presence of secret data embedded in cover objects, and there is a pressing demand to detect hidden messages in streaming media. This paper shows how a steganalysis algorithm based on the Fast Fourier Transform (FFT) can be used to detect secret data embedded in streaming media. The proposed algorithm uses machine parameter characteristics and a network sniffer to determine whether the Internet traffic contains streaming channels. The detected streaming data are then transferred from the time domain to the frequency domain through the FFT. The distributions of power spectra in the frequency domain of original VoIP streams and stego VoIP streams are then compared using a t-test, yielding a p-value of 7.5686E-176, far below the significance threshold. The results indicate that the proposed FFT-based steganalysis algorithm is effective in detecting secret data embedded in VoIP streaming media.
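The core of the detection pipeline, moving frames into the frequency domain via the FFT and comparing power-spectrum distributions with a t-test, can be sketched as follows. The toy cover/stego signals and the embedding strength are assumptions for illustration, not the paper's actual VoIP payloads or embedding scheme:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Toy stand-ins for VoIP payload frames (200 frames x 256 samples). The
# "stego" frames carry embedded bits, modelled here as a small additive
# bit pattern on top of the cover signal.
t = np.linspace(0, 1, 256, endpoint=False)
cover = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal((200, 256))
bits = rng.integers(0, 2, (200, 256))
stego = cover + 0.2 * bits

# Transfer each frame from the time domain to the frequency domain and
# summarise it by its mean spectral power.
p_cover = (np.abs(np.fft.rfft(cover, axis=1)) ** 2).mean(axis=1)
p_stego = (np.abs(np.fft.rfft(stego, axis=1)) ** 2).mean(axis=1)

# Compare the two power distributions with a t-test; a tiny p-value flags
# a statistically significant difference, i.e. suspected hidden data.
_, p_value = ttest_ind(p_cover, p_stego)
```

On this synthetic data the embedding shifts the spectral power enough for the t-test to reject equality decisively, mirroring the extremely small p-value the abstract reports on real streams.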

Keywords: steganalysis, security, Fast Fourier Transform, streaming media

Procedia PDF Downloads 130
5705 Technology Adoption Models: A Study on Brick Kiln Firms in Punjab

Authors: Ajay Kumar, Shamily Jaggi

Abstract:

In developing countries like India, the development of modern technologies has been a key determinant in accelerating industrialization and urbanization. In the pursuit of rapid economic growth, however, development is treated as a top priority while environmental protection is not given the same importance. As a result, a number of haphazardly sited industries have been established, leading to the deterioration of natural resources such as water, soil, and air. Environmental pollution is therefore increasing tremendously as industrialization and mechanization serve the demands of a growing population. With the increasing population, demand for bricks for construction work is also rising, establishing the brick industry as a growing one. Brick production requires two main resources: water, as a source of life, and soil, as a living environment. Water and soil conservation is a critical issue in areas facing scarcity of these resources. The purpose of this review paper is to provide a brief overview of the theoretical frameworks used in analyzing the adoption and/or acceptance of soil and water conservation practices in the brick industry. Different frameworks and models have been used to analyze the adoption and/or acceptance of new technologies and practices, including the technology acceptance model, the motivational model, the theory of reasoned action, innovation diffusion theory, the theory of planned behavior, and the unified theory of acceptance and use of technology (UTAUT). However, every model has limitations, such as not considering the environmental, contextual, and economic factors that may affect an individual's intention to perform a behavior. The paper concludes that, compared with the other models, the UTAUT appears better suited to understanding the dynamics of acceptance and adoption of water and soil conservation practices.

Keywords: brick kiln, water conservation, soil conservation, unified theory of acceptance and use of technology, technology adoption

Procedia PDF Downloads 90
5704 Efficient Layout-Aware Pretraining for Multimodal Form Understanding

Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose

Abstract:

Layout-aware language models have been used to create multimodal representations for documents in image form, achieving relatively high accuracy on document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performance processing units with large memory capacity. We propose an alternative approach that creates efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest state-of-the-art model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.

Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention

Procedia PDF Downloads 136
5703 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an ID-POS-based indicator for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a single supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and inverse attractiveness frequency are applied to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all model types proved useful for loyal customer classification. The model incorporating all three methods was the best for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, in terms of both valid partial regression coefficients and accuracy.
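Huff's gravity model used above assigns each store a patronage probability proportional to its attractiveness divided by a power of the distance. A minimal sketch, with hypothetical store floor areas and distances standing in for the real competitor data:

```python
import numpy as np

def huff_probabilities(attractiveness, distances, lam=2.0):
    """Huff's gravity model: probability that a customer patronises store j,
        P_j = (A_j / d_j**lam) / sum_k (A_k / d_k**lam),
    where A_j is the store's attractiveness (e.g. floor area), d_j the
    distance from the customer's home, and lam the distance-decay exponent.
    """
    utility = attractiveness / distances ** lam
    return utility / utility.sum()

# Hypothetical example: three supermarkets with floor areas (m^2) as
# attractiveness, and distances (km) from one customer's home.
area = np.array([2000.0, 3500.0, 1500.0])
dist = np.array([1.2, 2.5, 0.8])
p = huff_probabilities(area, dist)
```

Here the smallest store wins the highest probability because it is by far the closest, which is exactly the distance-versus-attractiveness trade-off the model encodes.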

Keywords: customer value, Huff's gravity model, POS, retailer

Procedia PDF Downloads 108
5702 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions

Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla

Abstract:

With ongoing urbanization, cities face increasing environmental challenges that impact human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena, such as Urban Heat Island (UHI) occurrences, in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable machine learning algorithms, together with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models; logistic regression emerges as the best-performing model on the evaluation metrics. From it, a mathematical equation is derived that separates areas with and without UHI effects, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and derived equation inform mitigation strategies and sustainable urban development in Tallinn, and offer valuable guidance for other locations with varying climates.
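The core modeling step, fitting an interpretable logistic regression on building features labeled with UHI occurrence, might be sketched as below on synthetic data; the feature construction and the labeling rule are assumptions for illustration, not the Tallinn dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-in for the labelled geospatial data: per grid cell,
# normalized building volume, height, area, and shape length (the features
# named in the abstract; the values and labeling rule here are invented).
n = 500
X = rng.uniform(0, 1, (n, 4))
# Toy rule: dense, tall urban fabric tends to produce UHI, plus noise
logit = 3 * X[:, 0] + 2 * X[:, 1] - 2.5 + 0.5 * rng.standard_normal(n)
y = (logit > 0).astype(int)

model = LogisticRegression().fit(X, y)
accuracy = model.score(X, y)
# The fitted coefficients give the interpretable "UHI equation":
#   log-odds = b0 + b1*volume + b2*height + b3*area + b4*shape_length
```

The point of choosing logistic regression here is exactly what the abstract highlights: the fitted coefficients can be read off as a mathematical equation linking building features to UHI log-odds.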

Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect

Procedia PDF Downloads 18
5701 A Predictive Machine Learning Model of the Survival of Female-led and Co-Led Small and Medium Enterprises in the UK

Authors: Mais Khader, Xingjie Wei

Abstract:

This research sheds light on female entrepreneurs by providing new insights into the survival of companies led by females in the UK. The study builds a predictive machine learning model of the survival of female-led and co-led small and medium enterprises (SMEs) in the UK over the period 2000-2020. The predictive model utilised a combination of financial and non-financial features related to both the companies and their directors to predict SMEs' survival, and these features were studied in terms of their contribution to the resulting model. Five machine learning models were used: decision tree, AdaBoost, Naïve Bayes, logistic regression, and SVM. The AdaBoost model had the highest performance of the five, with an accuracy of 73% and an AUC of 80%. The results show high feature importance, in predicting companies' survival, for company size, management experience, financial performance, industry, region, and the percentage of females in management.
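The AdaBoost-based survival prediction can be sketched as below. The synthetic features and survival rule are stand-ins for the real SME records (the 73% accuracy and 80% AUC quoted above come from the actual dataset, not from this toy):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for SME records: [company size, management experience,
# financial score, region code], with survival more likely for larger,
# better-managed, financially stronger firms (an illustrative rule only).
n = 1000
X = rng.standard_normal((n, 4))
signal = X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]
y = (signal + 0.5 * rng.standard_normal(n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# AUC on held-out companies, the headline metric reported in the abstract
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

A side benefit of boosted trees, matching the abstract's feature-importance discussion, is that `clf.feature_importances_` ranks which inputs drive the survival prediction.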

Keywords: company survival, entrepreneurship, females, machine learning, SMEs

Procedia PDF Downloads 84
5700 Optimizing Nitrogen Fertilizer Application in Rice Cultivation: A Decision Model for Top and Ear Dressing Dosages

Authors: Ya-Li Tsai

Abstract:

Nitrogen is a vital element for crop growth and significantly influences crop yield. In rice cultivation, farmers often apply substantial nitrogen fertilizer to maximize yields. However, excessive nitrogen application increases the risk of lodging and pest infestation, leading to yield losses. Additionally, conventional flooded irrigation methods consume significant water resources, necessitating precision agriculture and intelligent water management systems. This study leveraged physiological data and field images captured by unmanned aerial vehicles, considering fertilizer treatment and irrigation as key factors. Statistical models incorporating rice physiological data, yield, and vegetation indices from the image data were developed. Missing physiological data were addressed using multiple imputation and regression methods, and regression models were established using principal component analysis and stepwise regression. Target nitrogen accumulation at key growth stages was identified to optimize fertilizer application, with the difference between actual and target nitrogen accumulation guiding recommendations for the ear dressing dosage. Field experiments conducted in 2022 validated the recommended ear dressing dosage, demonstrating no significant difference in final yield compared to traditional fertilizer levels under alternate wetting and drying irrigation. These findings highlight the efficacy of applying dosages recommended by fertilizer decision models, offering the potential to reduce fertilizer use while maintaining yield in rice cultivation.
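The dosage recommendation logic, converting the gap between target and actual nitrogen accumulation into an ear dressing amount, might be sketched as below. The fertilizer nitrogen content and uptake-efficiency values are illustrative assumptions, not the paper's calibrated model:

```python
def ear_dressing_dosage(target_n_accum, actual_n_accum,
                        fertilizer_n_content=0.46, uptake_efficiency=0.5):
    """Recommend an ear-dressing fertilizer amount (kg/ha) from the gap
    between target and actual nitrogen accumulation at the key growth stage.

    The default parameter values are hypothetical: 46% N fertilizer (urea)
    and 50% crop uptake efficiency.
    """
    deficit = max(target_n_accum - actual_n_accum, 0.0)  # kg N/ha still needed
    return deficit / (fertilizer_n_content * uptake_efficiency)
```

For example, a 23 kg N/ha shortfall against the stage target would translate, under these assumed constants, into about 100 kg/ha of fertilizer, while a crop already at or above target gets no ear dressing at all.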

Keywords: intelligent fertilizer management, nitrogen top and ear dressing fertilizer, rice, yield optimization

Procedia PDF Downloads 44
5699 Energy Communities from Municipality Level to Province Level: A Comparison Using Autoregressive Integrated Moving Average Model

Authors: Amro Issam Hamed Attia Ramadan, Marco Zappatore, Pasquale Balena, Antonella Longo

Abstract:

Considering the energy crisis hitting Europe, it is becoming ever more necessary to change energy policies, depending less on fossil fuels and replacing them with energy from renewable sources. This has triggered the urge to use clean energy not only to satisfy energy needs and fulfill the required consumption but also to decrease the danger of climatic change due to harmful emissions. Many countries have already started creating energy communities based on renewable energy sources. The first step to understanding the energy needs of any place is to know its consumption precisely. In this work, we estimate the electricity consumption of a municipality that forms part of a rural area in southern Italy, using forecast models that estimate electricity consumption over the next ten years. We then apply the same model to the province in which the municipality is located and estimate its future consumption for the same period, to examine whether it is possible to start from the municipality level and reach the province level when creating energy communities.
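A minimal stand-in for the consumption forecasting step, using an AR(1) model fitted by least squares (the study uses the fuller ARIMA family); the monthly consumption figures below are hypothetical:

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of y[t] = c + phi * y[t-1] -- a minimal stand-in
    for the ARIMA fitting described in the abstract."""
    y_prev, y_next = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(y_prev), y_prev])
    (c, phi), *_ = np.linalg.lstsq(A, y_next, rcond=None)
    return c, phi

def forecast_ar1(last_value, steps, c, phi):
    """Iterate the fitted recursion forward for the requested horizon."""
    out, y = [], last_value
    for _ in range(steps):
        y = c + phi * y
        out.append(y)
    return np.array(out)

# Hypothetical monthly consumption series (GWh) for a small municipality
history = np.array([8.0, 8.6, 9.02, 9.314, 9.5198, 9.66386,
                    9.7647, 9.83529, 9.8847])
c, phi = fit_ar1(history)
future = forecast_ar1(history[-1], 120, c, phi)  # ten years of monthly steps
```

The same fit-then-iterate pattern scales directly from the municipality series to the province series, which is precisely the comparison the abstract sets up.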

Keywords: ARIMA, electricity consumption, forecasting models, time series

Procedia PDF Downloads 154
5698 Material Parameter Identification of Modified AbdelKarim-Ohno Model

Authors: Martin Cermak, Tomas Karasek, Jaroslav Rojicek

Abstract:

The key to phenomenological modelling of cyclic plasticity is a good understanding of the stress-strain behaviour of a given material. There are many models describing the behaviour of materials using numerous parameters and constants, and the combination of individual parameters in those material models significantly determines whether observed and predicted results agree. Parameter identification techniques such as random gradient, genetic algorithms, and sensitivity analysis are used to identify parameters through numerical modelling and simulation. In this paper, a genetic algorithm and sensitivity analysis are used to study the effect of four parameters of the modified AbdelKarim-Ohno cyclic plasticity model. Results predicted by Finite Element (FE) simulation are compared with experimental data from a biaxial ratcheting test with a semi-elliptical loading path.
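A minimal genetic-algorithm parameter identification loop of the kind described above can be sketched on a toy inverse problem; the saturating response curve below is an illustrative stand-in, not the modified AbdelKarim-Ohno model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover the parameters p of a known response curve
# from "experimental" data (here, a saturating hardening-type curve).
def model(p, x):
    return p[0] * (1.0 - np.exp(-p[1] * x))

x = np.linspace(0.0, 5.0, 50)
true_p = np.array([2.0, 1.5])
data = model(true_p, x)

def fitness(p):
    return -np.mean((model(p, x) - data) ** 2)  # higher is better

pop = rng.uniform(0.1, 5.0, (40, 2))           # random initial population
best_p, best_s = pop[0].copy(), fitness(pop[0])
for _ in range(150):
    scores = np.array([fitness(p) for p in pop])
    i = int(np.argmax(scores))
    if scores[i] > best_s:                      # keep the best ever seen
        best_s, best_p = scores[i], pop[i].copy()
    # tournament selection: each slot keeps the fitter of two random members
    duel = rng.integers(0, len(pop), (len(pop), 2))
    winners = np.where(scores[duel[:, 0]] >= scores[duel[:, 1]],
                       duel[:, 0], duel[:, 1])
    parents = pop[winners]
    # blend crossover with a shuffled partner, then Gaussian mutation
    partners = parents[rng.permutation(len(parents))]
    alpha = rng.uniform(0.0, 1.0, (len(parents), 1))
    pop = alpha * parents + (1.0 - alpha) * partners
    pop += 0.05 * rng.standard_normal(pop.shape)
    pop = np.clip(pop, 0.01, 10.0)
```

In the real workflow the fitness evaluation is an FE simulation of the ratcheting test rather than a closed-form curve, which is why each of the search's many evaluations is expensive and sensitivity analysis is used to prune the parameter set first.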

Keywords: genetic algorithm, sensitivity analysis, inverse approach, finite element method, cyclic plasticity, ratcheting

Procedia PDF Downloads 438
5697 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST/T-derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters using a grid search with 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters to obtain optimal classification performance.
As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. A Neyman-Pearson type of approach is then developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
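The GMM-plus-threshold stage might be sketched as follows: fit a Gaussian mixture to features from one state, then flag segments whose log-likelihood falls below a threshold chosen for a target false-alarm rate, in the Neyman-Pearson spirit. The 2-D feature clusters below are synthetic stand-ins for the ST/T-derived features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic 2-D stand-ins for ST/T-derived feature vectors: one cloud for
# segments from the modelled state (used to fit the GMM), one for segments
# that should be flagged as outliers.
in_model = rng.multivariate_normal([2.0, -1.0], [[0.5, 0.1], [0.1, 0.3]], 300)
other = rng.multivariate_normal([-2.0, 2.0], [[0.5, 0.0], [0.0, 0.5]], 300)

# Fit a GMM to represent the joint pdf of the in-model features
gmm = GaussianMixture(n_components=2, random_state=0).fit(in_model)

# Neyman-Pearson style rule: place the log-likelihood threshold at a
# quantile of the training scores so a target false-alarm rate (~5%) is met
threshold = np.quantile(gmm.score_samples(in_model), 0.05)

detect_other = gmm.score_samples(other) < threshold      # outliers flagged
false_alarms = gmm.score_samples(in_model) < threshold   # in-model flagged
```

Sweeping the threshold (rather than fixing one quantile) and recording detection and false-alarm rates at each value is exactly how the ROC curves described in the abstract are traced out.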

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 148
5696 Fulfillment of Models of Prenatal Care in Adolescents from Mexico and Chile

Authors: Alejandra Sierra, Gloria Valadez, Adriana Dávalos, Mirliana Ramírez

Abstract:

For years, the Pan American Health Organization/World Health Organization and other organizations have made efforts to improve access to, and the quality of, prenatal care as part of comprehensive programs for maternal and neonatal health, and standards of care have been renewed in order to migrate from a medical perspective to a holistic one. However, despite these efforts, current antenatal care models have not been verified by scientific evaluation to determine their effectiveness. Teenage pregnancy is considered a very important phenomenon, since it has been strongly associated with inequality, poverty, and the lack of gender equality; it is therefore important to analyze the antenatal care provided, including not only clinical interventions but also the surrounding promotion and health education activities. The objective of this study was to describe whether the activities previously established in prenatal care models were being performed in the care of pregnant teenagers attending prenatal care in health institutions in two cities, in Mexico and Chile, during 2013. Methods: Observational, descriptive, cross-sectional study. 170 pregnant women (13-19 years) receiving prenatal care in two health institutions were included (100 women from León, Mexico and 70 from Coquimbo, Chile). Data collection: direct survey and the perinatal clinical record card, used as checklists against the WHO antenatal care model (WHO, 2003), the Official Mexican Standard NOM-007-SSA2-1993, and the Personalized Service Manual on the Reproductive Process - Chile Crece Contigo; descriptive statistics were used for data analysis. The project was approved by the relevant ethics committees.
Results: Interventions focused on physical and gynecological examination, immunizations, monitoring of signs, and biochemical parameters were fulfilled in more than 84% of cases in both groups. By contrast, compliance rates for the guidance and counseling of pregnant teenagers in León were below 50%; although pregnant women in Coquimbo had higher compliance percentages, none reached 100%. The topics least often addressed were family planning, signs and symptoms of complications, and labor. Conclusions: Although the coverage of the interventions indicated in the prenatal care models was high, there were still shortcomings in the fulfillment of orientation, education, and health promotion activities. Deficiencies in adherence to prenatal care guidelines could be due to different circumstances, such as a lack of registration or incomplete filling of medical records, a lack of medical supplies or health personnel, or absences from prenatal check-up appointments, among many others. Studies are therefore required to evaluate the quality of prenatal care and the effectiveness of existing models, considering the roles of the different actors (pregnant women, professionals, and health institutions) involved in the functionality and quality of prenatal care models, in order to create strategies to design or improve the application of a complete process of promotion and prevention of maternal and child health, as well as sexual and reproductive health in general.

Keywords: adolescent health, health systems, maternal health, primary health care

Procedia PDF Downloads 196
5695 Development of Simple-To-Apply Biogas Kinetic Models for the Co-Digestion of Food Waste and Maize Husk

Authors: Owamah Hilary, O. C. Izinyon

Abstract:

Many existing biogas kinetic models are substrate specific and therefore difficult to apply to substrates they were not developed for. A biodegradability kinetic (BIK) model and a maximum biogas production potential and stability assessment (MBPPSA) model were therefore developed in this study for the anaerobic co-digestion of food waste and maize husk. The biodegradability constant (k) was estimated as 0.11 d⁻¹ using the BIK model. The maximum biogas production potential (A) obtained using the MBPPSA model corresponded well with the results obtained using the popular but more complex modified Gompertz model for digesters B-1, B-2, B-3, B-4, and B-5. The (If) value of the MBPPSA model also showed that digesters B-3, B-4, and B-5 were stable, while B-1 and B-2 were unstable. A similar stability observation was obtained using the modified Gompertz model. The MBPPSA model can therefore be used as an alternative model for anaerobic digestion feasibility studies and plant design.
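For comparison, fitting the modified Gompertz model mentioned above to cumulative biogas data can be sketched with nonlinear least squares; the digester data below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, A, Rmax, lam):
    """Modified Gompertz model for cumulative biogas production B(t):
        B(t) = A * exp(-exp(Rmax * e / A * (lam - t) + 1)),
    with A the maximum biogas potential, Rmax the peak production rate,
    and lam the lag phase."""
    return A * np.exp(-np.exp(Rmax * np.e / A * (lam - t) + 1.0))

# Hypothetical digester time series (days, mL biogas) for illustration only
rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0, 1.0)
obs = modified_gompertz(t, 500.0, 45.0, 3.0) + rng.normal(0.0, 5.0, t.size)

popt, _ = curve_fit(modified_gompertz, t, obs, p0=[400.0, 30.0, 2.0])
A_fit, Rmax_fit, lam_fit = popt
```

This three-parameter nonlinear fit is what makes the Gompertz approach "complex" relative to the simpler BIK/MBPPSA calculations the study proposes as an alternative.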

Keywords: biogas, inoculum, model development, stability assessment

Procedia PDF Downloads 411
5694 An Approach to Low Velocity Impact Damage Modelling of Variable Stiffness Curved Composite Plates

Authors: Buddhi Arachchige, Hessam Ghasemnejad

Abstract:

In this study, the post-impact behavior of curved composite plates subjected to low velocity impact was studied analytically and numerically. Approaches to damage modelling are proposed in which stiffness in the damaged region is degraded by reducing the thickness of that region. Spring-mass models were used to model the impact response of the plate and impactor. Two damage models were designed and compared to identify the one that best fitted the numerical results. The theoretical force-time responses were compared with numerical results obtained through a detailed study carried out in LS-DYNA. The modified damage model gave a good prediction of the analytical force-time response for different layups and geometries. This study provides guidance for selecting the most effective layups for variable stiffness curved composite panels able to withstand higher impact damage.
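A two-degree-of-freedom spring-mass sketch of the impact response, in the spirit of the models described above; the masses, stiffnesses, and impact velocity are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-DOF spring-mass idealization of low velocity impact: impactor mass m1
# contacts the plate through a contact stiffness k_c; the plate's effective
# mass m2 rests on its bending stiffness k_p.
m1, m2 = 0.5, 0.2          # kg: impactor and effective plate mass
k_c, k_p = 8.0e5, 2.0e5    # N/m: contact and plate bending stiffness
v0 = 2.0                   # m/s: impact velocity

def rhs(t, y):
    x1, v1, x2, v2 = y
    # Contact spring acts only in compression (impactor can rebound and lift off)
    f_contact = k_c * max(x1 - x2, 0.0)
    return [v1, -f_contact / m1, v2, (f_contact - k_p * x2) / m2]

sol = solve_ivp(rhs, (0.0, 0.01), [0.0, v0, 0.0, 0.0], max_step=5e-6)
contact_force = k_c * np.maximum(sol.y[0] - sol.y[2], 0.0)
peak_force = contact_force.max()   # peak of the force-time response
```

The resulting contact-force history is the kind of theoretical force-time response the study compares against LS-DYNA; damage would then be introduced by reducing k_p (via the degraded thickness) in the damaged region.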

Keywords: analytical modelling, composite damage, impact, variable stiffness

Procedia PDF Downloads 266
5693 Comparison of Different Machine Learning Models for Time-Series Based Load Forecasting of Electric Vehicle Charging Stations

Authors: H. J. Joshi, Satyajeet Patil, Parth Dandavate, Mihir Kulkarni, Harshita Agrawal

Abstract:

As the world looks towards a sustainable future, electric vehicles have become increasingly popular, and millions worldwide are looking to switch from the previously favored combustion-engine cars to electric ones. This demand has driven an increase in electric vehicle charging stations. The big challenge is that the randomness of electrical energy demand makes it tough for these charging stations to provide an adequate amount of energy over a specific amount of time. It has thus become increasingly crucial to model these patterns and forecast the energy needs of charging stations. This paper analyzes how different machine learning models perform on electric vehicle charging time-series data. The data set consists of authentic electric vehicle data from the Netherlands, comprising ten thousand transactions from public stations operated by EVnetNL.

Keywords: forecasting, smart grid, electric vehicle load forecasting, machine learning, time series forecasting

Procedia PDF Downloads 90
5692 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle an expressive number of encapsulated elements. Despite their great potential in the analysis of quasi-static electromagnetic field interactions, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits containing both lumped and distributed elements. Usually, the spatial discretization of the Finite-Difference Time-Domain (FDTD) method is chosen according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm alone, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies.
The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented in the Matlab® environment. Mur's boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the results obtained by the FDTD method, through the electric field values and the currents in the components, with analytical results using circuit parameters.
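The Courant stability criterion referred to above fixes the largest stable time step for a given spatial discretization; a one-function sketch for a uniform 3-D Yee grid:

```python
import numpy as np

def courant_dt(dx, dy, dz, c=299792458.0):
    """Maximum stable FDTD time step for a 3-D Yee grid (Courant criterion):
        dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)),
    with c the wave propagation velocity (free-space light speed by default).
    """
    return 1.0 / (c * np.sqrt(dx ** -2 + dy ** -2 + dz ** -2))

# Example: 1 mm cubic Yee cells, the kind of cell size varied in the
# parametric analysis of the lumped components (value chosen for illustration)
dt_max = courant_dt(1e-3, 1e-3, 1e-3)
```

This coupling is why the parametric analysis matters: shrinking the Yee cells to resolve a small lumped component forces a proportionally smaller time step, and the lumped-element update equations must stay stable under it.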

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 141
5691 PM₁₀ and PM₂.₅ Concentrations in Bangkok over the Last 10 Years: Implications for Air Quality and Health

Authors: Tin Thongthammachart, Wanida Jinsart

Abstract:

Atmospheric particulate matter with a diameter of less than 10 microns (PM₁₀) or less than 2.5 microns (PM₂.₅) has adverse health effects. The impact of PM was studied from both health and regulatory perspectives. Ambient PM data were collected over ten years, from 2007 to 2017, in Bangkok and the vicinity areas of Thailand. Statistical models were used to forecast PM concentrations from 2018 to 2020, with monthly averaged monitored concentrations of PM₁₀ and PM₂.₅ used as input to forecast the monthly average PM concentrations. The forecasting results were validated by the root mean square error (RMSE). The predicted results were used to determine the hazard risk for carcinogenic disease, and the health risk values were interpolated in GIS with the ordinary kriging technique to create hazard maps of Bangkok and the vicinity area. The GIS-based maps illustrate the variability of PM distribution and high-risk locations. These results could support national policy for the protection of human health.

Keywords: PM₁₀, PM₂.₅, statistical models, atmospheric particulate matter

Procedia PDF Downloads 149
5690 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data over experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggests that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing participants to 'agree' or 'disagree' pushed them towards respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree' even though they had been exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even when starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule and allows us to build models which are directly grounded in experimental results.
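The opinion encoding described above is easy to make concrete; the helper below is a minimal sketch of that encoding (the example values are illustrative, not participant data).

```python
# Encode a natural-language answer plus certainty (1-10) into the single
# 'continuous opinion' in [-10, 10] used by the study.
def continuous_opinion(answer: str, certainty: int) -> int:
    sign = {"agree": 1, "disagree": -1}[answer]
    if not 1 <= certainty <= 10:
        raise ValueError("certainty must be between 1 and 10")
    return sign * certainty

# A participant who flips answer while keeping similar certainty jumps
# across zero, rather than drifting to an average as typical rules assume.
before = continuous_opinion("agree", 7)
after = continuous_opinion("disagree", 6)
print(before, after)
```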

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 143
5689 Maximum Power Point Tracking Based on Estimated Power for PV Energy Conversion System

Authors: Zainab Almukhtar, Adel Merabet

Abstract:

In this paper, a method for maximum power point tracking of a photovoltaic energy conversion system is presented. The method is based on using the difference between the power from the solar panel and an estimated power value to control the DC-DC converter of the photovoltaic system. The difference is continuously compared with a preset permitted error value. If the power difference is more than the error, the estimated power is multiplied by a factor and the operation is repeated until the difference is less than or equal to the threshold error. The difference in power is used to trigger a DC-DC boost converter in order to raise the voltage to where the maximum power point is achieved. The proposed method was experimentally verified through a PV energy conversion system driven by the OPAL-RT real-time controller. The method was tested under varying radiation conditions and load requirements, and the photovoltaic panel was operated at its maximum power under different irradiation conditions.
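A rough sketch of the described loop, assuming a toy power-voltage characteristic and illustrative constants (the scaling factor, error threshold, and duty-cycle step are not from the paper): the measured power is compared with an estimated power, the estimate is scaled until the difference falls below the preset error, and the difference drives the boost-converter duty cycle.

```python
# Toy P-V characteristic with a maximum near duty = 0.6 (assumption).
def panel_power(duty):
    return max(0.0, 100 - 400 * (duty - 0.6) ** 2)

def track(p_est=50.0, factor=1.05, err_max=0.5, duty=0.3, step=0.01):
    p_meas = panel_power(duty)
    for _ in range(1000):
        p_meas = panel_power(duty)
        diff = p_meas - p_est
        if abs(diff) <= err_max:
            break                                    # estimate matches panel
        p_est *= factor if diff > 0 else 1 / factor  # refine the estimate
        duty += step if diff > 0 else -step          # nudge converter duty
    return p_meas, duty

p, d = track()
print(round(p, 1), round(d, 2))
```

The loop stops once the estimated and measured powers agree within the threshold, which is the termination condition the abstract describes.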

Keywords: control system, error, solar panel, maximum power point tracking (MPPT)

Procedia PDF Downloads 261
5688 Two-Dimensional Nanostack-Based On-Chip Wiring

Authors: Nikhil Jain, Bin Yu

Abstract:

The material behavior of graphene, a single layer of carbon lattice, is extremely sensitive to its dielectric environment. We demonstrate improvement in the electronic performance of graphene nanowire interconnects with full encapsulation by the lattice-matching, chemically inert, 2D layered insulator hexagonal boron nitride (h-BN). A novel layer-based transfer technique is developed to construct the h-BN/MLG/h-BN heterostructures. The encapsulated graphene wires are characterized and compared with those on SiO₂ or h-BN substrates without a passivating h-BN layer. Significant improvements in maximum current-carrying density, breakdown threshold, and power density in encapsulated graphene wires are observed. These critical improvements are achieved without compromising the carrier transport characteristics in graphene. Furthermore, the encapsulated graphene wires exhibit electrical behavior less sensitive to ambient conditions, as compared with the non-passivated ones. Overall, the h-BN/graphene/h-BN heterostructure presents a robust material platform towards the implementation of high-speed carbon-based interconnects.

Keywords: two-dimensional nanosheet, graphene, hexagonal boron nitride, heterostructure, interconnects

Procedia PDF Downloads 441
5687 The Prognostic Prediction Value of Positive Lymph Nodes Numbers for the Hypopharyngeal Squamous Cell Carcinoma

Authors: Wendu Pang, Yaxin Luo, Junhong Li, Yu Zhao, Danni Cheng, Yufang Rao, Minzi Mao, Ke Qiu, Yijun Dong, Fei Chen, Jun Liu, Jian Zou, Haiyang Wang, Wei Xu, Jianjun Ren

Abstract:

We aimed to compare the prognostic prediction value of positive lymph node number (PLNN) to the American Joint Committee on Cancer (AJCC) tumor, lymph node, and metastasis (TNM) staging system for patients with hypopharyngeal squamous cell carcinoma (HPSCC). A total of 826 patients with HPSCC from the Surveillance, Epidemiology, and End Results database (2004–2015) were identified and split into two independent cohorts: training (n=461) and validation (n=365). Univariate and multivariate Cox regression analyses were used to evaluate the prognostic effects of PLNN in patients with HPSCC. We further applied six Cox regression models to compare the survival predictive values of the PLNN and AJCC TNM staging system. PLNN showed a significant association with overall survival (OS) and cancer-specific survival (CSS) (P < 0.001) in both univariate and multivariable analyses, and was divided into three groups (PLNN 0, PLNN 1–5, and PLNN > 5). In the training cohort, multivariate analysis revealed that an increased PLNN of HPSCC gave rise to significantly poorer OS and CSS after adjusting for age, sex, tumor size, and cancer stage; this trend was also verified by the validation cohort. Additionally, the survival model incorporating a composite of PLNN and TNM classification (C-index, 0.705, 0.734) performed better than the PLNN and AJCC TNM models. PLNN can serve as a powerful survival predictor for patients with HPSCC and is a surrogate supplement for cancer staging systems.
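The two ingredients of the comparison, grouping patients by PLNN and scoring models by concordance (C-index), can be sketched as below; the survival times, event flags, and risk scores are toy values, not study data.

```python
# PLNN grouping exactly as defined in the abstract.
def plnn_group(n):
    return "PLNN 0" if n == 0 else ("PLNN 1-5" if n <= 5 else "PLNN >5")

# Harrell's C-index: fraction of comparable pairs in which the patient
# with the higher risk score has the shorter observed survival.
def c_index(risk, time, event):
    conc = comp = 0.0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:   # i's death observed first
                comp += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / comp

time = [5, 12, 30, 44]        # months of follow-up (toy)
event = [1, 1, 0, 1]          # 1 = death observed, 0 = censored
risk = [2.0, 1.5, 0.3, 0.1]   # higher risk should mean earlier death
print(plnn_group(7), round(c_index(risk, time, event), 2))
```

A C-index of 0.5 is chance-level ranking; the abstract's 0.705 and 0.734 values sit comfortably above that.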

Keywords: hypopharyngeal squamous cell carcinoma, positive lymph nodes number, prognosis, prediction models, survival predictive values

Procedia PDF Downloads 138
5686 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
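A hedged illustration of the general idea, not the paper's estimator: two candidate mean models are combined with a weight chosen to minimise an estimate of squared prediction risk (here, hold-out squared error stands in for the paper's plug-in criterion), on synthetic data whose noise scale varies multiplicatively with the regressor.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 300)
# multiplicative-style heteroscedasticity: log noise scale linear in x
y = 1.0 + 0.8 * x + 0.3 * x**2 + rng.normal(0, 0.5 * np.exp(0.3 * x))

xt, yt, xv, yv = x[:200], y[:200], x[200:], y[200:]
c1 = np.polyfit(xt, yt, 1)                 # candidate 1: linear mean
c2 = np.polyfit(xt, yt, 2)                 # candidate 2: quadratic mean
p1, p2 = np.polyval(c1, xv), np.polyval(c2, xv)

# choose the averaging weight by minimising estimated prediction risk
grid = np.linspace(0, 1, 101)
risks = [np.mean((w * p1 + (1 - w) * p2 - yv) ** 2) for w in grid]
w_star = grid[int(np.argmin(risks))]
print(round(w_star, 2), round(min(risks), 3))
```

By construction the averaged estimator's risk is never worse than either single model's, which is the hedging-against-bad-models property the abstract highlights.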

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 360
5685 Competency Model as a Key Tool for Managing People in Organizations: Presentation of a Model

Authors: Andrea Čopíková

Abstract:

Competency-based management is a new approach to management that addresses the complexity organizations face, with the aim of identifying and solving an organization's problems and learning how to avoid them in the future. It teaches organizations to create, beyond the temporary state of stability, a vital organization that is permanently able to exploit and profit from internal and external opportunities. The aim of this paper is to propose a process of competency model design, on the basis of which a competency model for a financial department manager in a production company will be created. Competency models are a very useful tool in many personnel processes in any organization. They are used for the acquisition and selection of employees, for designing training and development activities, and for employee evaluation, and they can serve as a guide for career planning and as a tool for succession planning, especially for managerial positions. When creating the competency model, the Analytic Hierarchy Process (AHP) and quantitative pairwise comparison (Saaty's method) are used; these are among the most widely used methods for determining weights within the AHP procedure. The introductory part of the paper presents research results pertaining to the use of competency models in practice and then explains the issue of competencies and competency models. The application part describes in detail the proposed methodology for the creation of competency models, on the basis of which the competency model for the position of financial department manager in a foreign manufacturing company is created. The conclusion of the paper presents the final competency model for the above-mentioned position. The competency model divides the selected competencies into three groups: managerial, interpersonal, and functional. The model describes in detail the individual levels of the competencies, their target value (required level), and their level of importance.
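The weighting step of Saaty's method can be sketched directly: the priority weights are the normalised principal eigenvector of a reciprocal pairwise-comparison matrix, checked for consistency. The 3×3 matrix below (comparing, say, managerial vs. interpersonal vs. functional competency groups) is purely illustrative.

```python
import numpy as np

# Reciprocal pairwise-comparison matrix on Saaty's 1-9 scale (illustrative).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # normalised priority weights

# consistency ratio: CR < 0.1 is conventionally acceptable
ci = (eigvals[k].real - 3) / (3 - 1)
cr = ci / 0.58                      # 0.58 = random index for n = 3
print(np.round(w, 3), round(cr, 3))
```

A CR above 0.1 would signal that the expert's pairwise judgments are too inconsistent and should be revisited before the weights are used in the competency model.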

Keywords: analytic hierarchy process, competency, competency model, quantitative pairwise comparison

Procedia PDF Downloads 230
5684 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, different impairment compensation algorithms have caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN will be a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
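One of the metrics the DNN is trained to reduce is easy to state in code: for an OFDM symbol x[n] (the IFFT of the QAM subcarriers), PAPR = max|x|² / mean|x|², usually quoted in dB. The 16-QAM mapping and subcarrier count below are illustrative, not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
levels = np.array([-3, -1, 1, 3])
n_sub = 64
# random 16-QAM symbols on each subcarrier
sym = rng.choice(levels, n_sub) + 1j * rng.choice(levels, n_sub)
x = np.fft.ifft(sym)                # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(round(papr_db, 2))
```

Untreated OFDM symbols routinely show PAPR of several dB, which is why PAPR reduction at the transmitter is worth a dedicated loss term.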

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 130
5683 Application of Artificial Neural Network in Initiating Cleaning of Photovoltaic Solar Panels

Authors: Mohamed Mokhtar, Mostafa F. Shaaban

Abstract:

Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem that faces the growth of solar power plants. The accumulation of dust on the solar panels significantly degrades output from these panels. Hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions when required to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used in the training of the ANN model. Then, this ANN model will be fed by the output power from solar panels, ambient temperature, and solar irradiance, and thus, it will be able to estimate the amount of dust accumulated on solar panels at these conditions. The model was tested on different case studies to confirm the accuracy of the developed model.
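A hedged sketch of the trigger logic: an estimator maps (output power, ambient temperature, irradiance) to a dust level, and cleaning is initiated only when the estimate exceeds a threshold. A least-squares linear model stands in for the trained ANN, and all data, coefficients, and the threshold are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
irradiance = rng.uniform(400, 1000, n)   # W/m^2
temp = rng.uniform(20, 45, n)            # deg C
dust = rng.uniform(0, 1, n)              # normalised dust level (target)
# synthetic panel: dust cuts output, heat adds a small penalty
power = 0.18 * irradiance * (1 - 0.4 * dust) - 0.3 * (temp - 25)

# fit the dust estimator from (power, temp, irradiance)
X = np.column_stack([power, temp, irradiance, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, dust, rcond=None)

def needs_cleaning(p, t, g, threshold=0.6):
    est = coef @ np.array([p, t, g, 1.0])   # estimated dust level
    return bool(est > threshold)            # trigger only above threshold

# dusty panel: low power despite high irradiance -> cleaning triggered
print(needs_cleaning(90.0, 30.0, 900.0))
```

In the paper's setup the estimator is an ANN trained on measured data, but the thresholded decision around it is the same.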

Keywords: machine learning, dust, PV panels, renewable energy

Procedia PDF Downloads 128
5682 Forecasting Stock Prices Based on the Residual Income Valuation Model: Evidence from a Time-Series Approach

Authors: Chen-Yin Kuo, Yung-Hsin Lee

Abstract:

Previous studies applying the residual income valuation (RIV) model generally use panel data and a single-equation model to forecast stock prices. Unlike these, this paper uses Taiwan longitudinal data to estimate multi-equation time-series models such as the vector autoregressive (VAR) model and the vector error correction model (VECM), and conducts out-of-sample forecasting. Further, this work assesses their forecasting performance by two instruments. In line with extant research, the major finding shows that the VECM outperforms the other three models in forecasting for three stock sectors over the entire horizon. It implies that an error correction term containing long-run information contributes to improved forecasting accuracy. Moreover, the pattern of the composite measure shows that at longer horizons, the VECM produces a greater reduction in errors and performs substantially better than the VAR.
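The out-of-sample evaluation loop can be sketched with a VAR(1) fitted by OLS on a synthetic bivariate series (think price and residual income); real VECM estimation (e.g. Johansen's procedure) is more involved, so this only illustrates the fit / forecast / score-by-RMSE pattern, with all data and coefficients assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 220
A = np.array([[0.7, 0.2], [0.1, 0.6]])       # stable VAR(1) coefficients
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(0, 1, 2)

train, test = y[:200], y[200:]
X, Y = train[:-1], train[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # OLS estimate of A

# iterated one-step forecasts over the hold-out horizon
fc, last = [], train[-1]
for _ in range(len(test)):
    last = A_hat @ last
    fc.append(last)
rmse = np.sqrt(np.mean((np.array(fc) - test) ** 2))
print(round(rmse, 2))
```

Comparing this RMSE across candidate models (VAR vs. VECM, different lag orders) over the hold-out period is the evaluation the abstract reports.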

Keywords: residual income valuation model, vector error correction model, out of sample forecasting, forecasting accuracy

Procedia PDF Downloads 305
5681 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, and the rising number of vehicles along with transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) is found to exceed the standard permissible value at all locations. Noise barriers or enclosures are useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration (FHWA) model and the UK Calculation of Road Traffic Noise (CORTN) model are used to develop spreadsheets for noise prediction. Spreadsheets are also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barrier using both the FHWA and CORTN models. During data collection it was found that the receivers are located far away from the road at the Rithala and Moolchand sites; hence, the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33, and CORTN with an average percentage error of -8.5. At all sites, noise levels at the receivers exceeded the standard limit of 55 dB. The calculations show that the existing walls are reducing noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower noise reduction of only 5.88 dB was observed at Friends Colony. The analysis shows that the Friends Colony site needs a much greater barrier height because of residential buildings abutting the road; a great amount of traffic was also observed there since it is a national highway, and the diminishing of noise due to the propagation effect was very small at this site. As the FHWA and CORTN models were implemented in an Excel programme, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
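The average-percentage-error figures quoted above come from straightforward arithmetic on predicted versus measured levels; the helper below shows that calculation with illustrative dB(A) values, not the study's data (a negative result indicates underestimation by the model).

```python
# Average percentage error of model predictions against measurements.
def avg_percentage_error(predicted, measured):
    return sum((p - m) / m * 100 for p, m in zip(predicted, measured)) / len(measured)

measured = [72.0, 68.5, 75.2]    # Leq at receivers, dB(A) (illustrative)
predicted = [66.8, 63.4, 69.5]   # model predictions, dB(A) (illustrative)
print(round(avg_percentage_error(predicted, measured), 2))
```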

Keywords: FHWA, CORTN, noise sources, noise barriers

Procedia PDF Downloads 123
5680 Logistics Model for Improving Quality in Railway Transport

Authors: Eva Nedeliakova, Juraj Camaj, Jaroslav Masek

Abstract:

This contribution is focused on a methodology for identifying levels of quality and improving quality through a new logistics model in railway transport. It is oriented towards the application of dynamic quality models, which represent an innovative method of evaluating service quality. Through this conception, the time factor and the expected and perceived quality at each moment of the transportation process within the logistics chain can be taken into account. Various models describe the improvement of quality while emphasizing the time factor throughout the whole transportation logistics chain. The quality of services in railway transport can be determined from the existing level of service quality by detecting the causes of dissatisfaction among employees as well as customers, and by uncovering strengths and weaknesses. The new logistics model is able to recognize critical processes in the logistics chain. It includes a service quality rating that must respect the specific properties of services: unrepeatability, impalpability, consumption at the time they are provided, and, in particular, changeability, which is a significant factor in the conditions of rail transport as well. These peculiarities influence service quality in the face of constantly increasing requirements and result in new ways of finding progressive attitudes towards service quality rating.

Keywords: logistics model, quality, railway transport

Procedia PDF Downloads 550
5679 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

Authors: Chad Goldsworthy, B. Rajeswari Matam

Abstract:

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve, and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing those of manual human classification. The high class imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of focal loss, in comparison to the traditional cross-entropy loss function, to improve the identification of minority species in the "255 Bird Species" dataset from Kaggle. The results show that, although focal loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five, and ten (minority) species by 37.5%, 15.7%, and 10.8%, respectively, while also improving the overall accuracy by 2.96%.
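A minimal sketch of the focal loss itself: FL(p_t) = -α(1 - p_t)^γ log(p_t), which reduces to cross entropy at γ = 0 and, for larger γ, down-weights well-classified (majority-class) examples so that rare species contribute relatively more to the gradient. The γ and α values below are common defaults, not necessarily the paper's settings.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=1.0, eps=1e-12):
    # p_t is the predicted probability of the true class
    p_t = np.where(labels == 1, probs, 1 - probs)
    return -alpha * (1 - p_t) ** gamma * np.log(p_t + eps)

probs = np.array([0.95, 0.6, 0.1])   # confident, uncertain, badly wrong
labels = np.array([1, 1, 1])
print(focal_loss(probs, labels, gamma=0.0))  # plain cross entropy
print(focal_loss(probs, labels, gamma=2.0))  # easy example down-weighted
```

With γ = 2 the confident prediction's loss is scaled by (1 - 0.95)² = 0.0025, while the badly wrong one keeps most of its weight, which is the rebalancing mechanism the paper exploits.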

Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation

Procedia PDF Downloads 172