Search results for: estimation algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3842

692 Temperature Effect on the Electrical Impedance and Permittivity of Ouargla (Algeria) Dune Sand at Different Frequencies

Authors: Naamane Remita, Mohammed laïd Mechri, Nouredine Zekri, Smaïl Chihi

Abstract:

The goal of this study is the estimation of the real and imaginary components of both the electrical impedance and the permittivity (z', z'' and ε', ε'', respectively) of Ouargla dune sand at different temperatures and frequencies, with an alternating (AC) excitation of 1 volt, using impedance spectroscopy (IS). This method is simple and non-destructive, and its results can frequently be correlated with a number of physical properties, dielectric properties and the impact of composition on the electrical conductivity of solids. The experimental results revealed that the real part of the impedance is higher at higher temperature in the low-frequency region and gradually decreases with increasing frequency. At high frequencies, all the values of the real part of the impedance were positive. At low frequency, the values of the imaginary part were positive at all temperatures except 1200 degrees, where they were negative. At medium frequencies, the reactance values were negative at temperatures of 25, 400, 200 and 600 degrees, and became positive at the remaining temperatures. At high frequencies, of the order of MHz, the values of the imaginary part of the electrical impedance were the opposite of what we recorded for the medium frequencies. The results showed that the electrical permittivity decreases with increasing frequency: at low frequency we recorded permittivity values of the order of 10^11, at medium frequencies of the order of 10^7, and at high frequencies of the order of 10^2. The real part of the electrical permittivity took large values at temperatures of 200 and 600 degrees Celsius at the lowest frequency, while the smallest permittivity value was recorded at 400 degrees Celsius at the highest frequency. The imaginary part of the electrical permittivity showed large values at the lowest frequency and then decreased as the frequency increased (the higher the frequency, the lower the values of the imaginary part of the electrical permittivity). The character of the electrical impedance variation indicates an opportunity to characterize the polarization of Ouargla dune sand and to determine whether this material consumes or produces energy. It also makes it possible to identify a satisfactory equivalent electric circuit, whether inductive or capacitive.
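
As a minimal illustration of the standard reduction behind such measurements, the sketch below converts a measured complex impedance Z = Z' + jZ'' into ε' and ε'' via the parallel-plate relation ε* = 1/(jωC₀Z); the electrode area and spacing are hypothetical placeholders, not values from this study.

```python
import numpy as np

EPS0 = 8.854e-12            # vacuum permittivity, F/m
AREA, GAP = 1e-4, 1e-3      # hypothetical electrode area (m^2) and spacing (m)
C0 = EPS0 * AREA / GAP      # empty-cell capacitance

def complex_permittivity(freq_hz, z_real, z_imag):
    """Convert a measured impedance Z = Z' + jZ'' into (eps', eps'')."""
    z = z_real + 1j * z_imag
    omega = 2 * np.pi * freq_hz
    eps = 1.0 / (1j * omega * C0 * z)   # eps* = eps' - j*eps''
    return eps.real, -eps.imag

print(complex_permittivity(1e3, 5e4, -2e4))  # example reading at 1 kHz
```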

Keywords: electrical impedance, electrical permittivity, temperature, impedance spectroscopy, Ouargla dune sand

Procedia PDF Downloads 48
691 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products requires two main steps. First, the rainy pixels are identified. This step is achieved with a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold of less than 11 Kelvin on the temperature difference, which approximates the clouds with a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud top temperature and the TMI rain rates. The correlation between these two variables is negative, meaning that rainfall intensity increases with decreasing temperature. The fitted equation is then applied to the whole day of MSG images at 15-minute intervals, which are summed to daily totals. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion for large rainfall depth (> 50 mm/day) occurring at at least one rainfall station. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation results of the rainfall estimates combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSGMPE and PERSIANN products. The combined product showed a better performance during the wet season. We also note an overestimation of the maximum estimated rain for many events.
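
A sketch of the two blending steps, assuming the rain/no-rain test quoted in the abstract (IR10.8 − WV6.2 < 11 K) and a simple least-squares line between cloud-top temperature and TMI rain rate; this is an illustration, not the authors' code.

```python
import numpy as np

def rain_mask(ir108_k, wv62_k, dt_thresh=11.0):
    """Step 1: rainy-pixel test, IR10.8 - WV6.2 < 11 K."""
    return (ir108_k - wv62_k) < dt_thresh

def fit_ctt_rain(ctt_k, tmi_mmh):
    """Step 2: fit rain = a*T + b on collocated rainy pixels; the slope
    is expected to be negative (colder tops, heavier rain)."""
    return np.polyfit(ctt_k, tmi_mmh, 1)

def daily_total(ctt_scenes, masks, a, b):
    """Apply the fit to the 96 daily 15-minute MSG scenes and accumulate
    the rates (mm/h) into a daily depth (mm); 15 min = 0.25 h."""
    rates = np.where(masks, np.clip(a * ctt_scenes + b, 0.0, None), 0.0)
    return rates.sum(axis=0) * 0.25
```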

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 176
690 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments

Authors: Xiaoqin Wang, Li Yin

Abstract:

Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates are often influenced by earlier treatments and in turn influence subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (the null paradox). Furthermore, the dimension of the parameters is huge (the curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects from a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing. We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches, and that the poor performance was largely due to its early measures during the initial period of the pandemic.

Keywords: causal effect, point effect, statistical modelling, sequential causal inference

Procedia PDF Downloads 205
689 Metagenomics-Based Molecular Epidemiology of Viral Diseases

Authors: Vyacheslav Furtak, Merja Roivainen, Olga Mirochnichenko, Majid Laassri, Bella Bidzhieva, Tatiana Zagorodnyaya, Vladimir Chizhikov, Konstantin Chumakov

Abstract:

Molecular epidemiology and environmental surveillance are parts of a rational strategy to control infectious diseases. They have been widely used in the worldwide campaign to eradicate poliomyelitis, which otherwise would be complicated by the inability to rapidly respond to outbreaks and determine sources of the infection. The conventional scheme involves isolation of viruses from patients and the environment, followed by their identification by nucleotide sequence analysis to determine phylogenetic relationships. This is a tedious and time-consuming process that yields definitive results when it may be too late to implement countermeasures. Because of the difficulty of high-throughput full-genome sequencing, most such studies are conducted by sequencing only capsid genes or parts of them. Therefore, important information about the contribution of other parts of the genome, and of inter- and intra-species recombination, to viral evolution is not captured. Here we propose a new approach based on the rapid concentration of sewage samples with tangential flow filtration, followed by deep sequencing and reconstruction of the nucleotide sequences of viruses present in the samples. The entire nucleic acid content of each sample is sequenced, thus preserving in digital format the complete spectrum of viruses. A set of rapid algorithms was developed to separate deep-sequencing reads into discrete populations corresponding to each virus and assemble them into full-length consensus contigs, as well as to generate a complete profile of sequence heterogeneities in each of them. This provides an effective approach to study the molecular epidemiology and evolution of natural viral populations.

Keywords: poliovirus, eradication, environmental surveillance, laboratory diagnosis

Procedia PDF Downloads 281
688 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting and capturing. This work provides the validation test results of the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. The paper first summarizes the challenges of relative motion control in space, in particular the constraints imposed by the mission, the spacecraft and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC), which optimizes the fuel consumption and explicitly handles the physical and geometric constraints in the system, e.g. thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed and explained via a dual-quaternion-based kinematic description. The resulting convex optimization problem allows real-time implementation, supported by a detailed discussion of the computational time requirements and of the obtained results with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also cover a comparison of the proposed algorithm with a Linear-Quadratic Regulator (LQR) based control law to highlight the clear advantages of the MPC formulation.
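
The following toy cvxpy program conveys the constrained-MPC formulation in miniature: an L1 (fuel-like) thrust cost with hard input bounds over a finite horizon. A discrete double integrator stands in for the paper's coupled dual-quaternion dynamics, and the horizon, bounds and initial state are all hypothetical.

```python
import numpy as np
import cvxpy as cp

# Hypothetical discrete double-integrator stand-in for the relative dynamics
# (the paper couples translation and rotation through dual quaternions).
dt, N = 10.0, 30
A = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])
B = np.block([[0.5 * dt ** 2 * np.eye(3)],
              [dt * np.eye(3)]])

x = cp.Variable((6, N + 1))                        # relative position and velocity
u = cp.Variable((3, N))                            # thrust acceleration
x0 = np.array([100.0, 50.0, 0.0, 0.0, 0.0, 0.0])   # hypothetical start (m, m/s)
u_max = 0.1                                        # hypothetical bound (m/s^2)

constraints = [x[:, 0] == x0, x[:, N] == 0]
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], "inf") <= u_max]

# L1 thrust cost as a propellant proxy; the problem stays convex, which is
# what makes the real-time implementation discussed above plausible.
problem = cp.Problem(cp.Minimize(cp.sum(cp.abs(u))), constraints)
problem.solve()
print(problem.status, problem.value)
```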

Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics

Procedia PDF Downloads 124
687 A Study on the Korean Connected Industrial Parks Smart Logistics IT Financial Enterprise Architecture

Authors: Ilgoun Kim, Jongpil Jeong

Abstract:

Recently, a connected industrial parks (CIPs) architecture has been proposed using new technologies such as RFID, cloud computing, CPS, Big Data, 5G, IIoT, VR-AR, and ventral AI algorithms based on IoT. This researcher noted the vehicle junction problem (VJP) as a more specific detail of the CIPs architectural models. The VJP noted by this researcher includes 'efficient AI physical connection challenges for vehicles', 'financial issues with complex vehicle physical connections', and 'welfare and working conditions of the personnel involved in complex vehicle physical connections'. In this paper, we propose a public solution architecture for the 'electronic financial problem of complex vehicle physical connections' as a detailed task within the vehicle junction problem (VJP). The researcher sought solutions to business, consumer, and Korean social problems through technological advancement. We studied how the benefits of technological development can reach the many consumers in Korean society and the many managers of small Korean companies, not only a few specific companies. In order to implement the connected industrial parks (CIPs) architecture using the new technologies more concretely, we noted the vehicle junction problem (VJP) within the smart factory industrial complex and the process of achieving vehicle junction performance among several electronic processes. This researcher proposes a more detailed, integrated public finance enterprise architecture among the overall CIPs architectures. The main details of the public integrated financial enterprise architecture are organized into four main categories: 'business', 'data', 'technique', and 'finance'.

Keywords: enterprise architecture, IT Finance, smart logistics, CIPs

Procedia PDF Downloads 167
686 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems

Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer

Abstract:

This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominancy concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The image of the Pareto set under the objective functions is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems.
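
A minimal sketch of the dominancy test used to extract the Pareto set from a population of candidate controller tunings; the four objective columns named in the comment are stand-ins for the design goals listed above.

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated rows of `costs` (all objectives minimized).
    Point j dominates point i if it is <= in every objective and < in at
    least one. O(n^2), which is fine for an NSGA-II-sized archive."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominators = np.all(costs <= costs[i], axis=1) & \
                     np.any(costs < costs[i], axis=1)
        if dominators.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Hypothetical objective columns: inner/outer speed ratio, disturbance
# propagation, noise sensitivity, control energy.
rng = np.random.default_rng(0)
front = pareto_front(rng.random((200, 4)))
print(len(front), "non-dominated designs")
```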

Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control

Procedia PDF Downloads 153
685 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria

Authors: Oyatayo Kehinde Taofik, Ndabula Christopher

Abstract:

Drainage basins are a central theme in the green economy. Rising challenges in flooding, erosion, sediment transport and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation and prediction of fluvial responses and, thus, of associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied a Remote Sensing and Geographic Information System approach to the parameterization and characterization of the morphometric variables of the Katsina-Ala basin, using a 30 m resolution Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala at a scale of 1:50,000. Linear, areal and relief parameters were characterized. The results show that the Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. The Ala and Udene sub-watersheds have values of 0.48 / 0.39 / 0.35 / 9.97 and 0.40 / 0.35 / 0.32 / 6.0 for the elongation ratio, circularity ratio, form factor and relief ratio, respectively. They also have values of 0.86 / 0.011 and 1.57 / 0.016 for drainage texture and ruggedness index. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and thus to higher sediment loads in the river channels, which will predispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse texture, with good permeability of subsurface materials and good infiltration capacity, which significantly recharges the groundwater. The study recommends that the Local and State Governments make efforts to reduce the extent of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grassroots level.
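
The ratios reported above follow the standard morphometric definitions (Schumm's elongation and relief ratios, Miller's circularity ratio, Horton's form factor); a small helper computing them is sketched below, with hypothetical input values.

```python
import math

def morphometry(area_km2, perimeter_km, basin_length_km, relief_km):
    """Standard drainage-basin ratios (Schumm, Miller, Horton)."""
    return {
        "elongation_ratio": (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi),
        "circularity_ratio": 4.0 * math.pi * area_km2 / perimeter_km ** 2,
        "form_factor": area_km2 / basin_length_km ** 2,
        "relief_ratio": relief_km / basin_length_km,
    }

print(morphometry(area_km2=820.0, perimeter_km=180.0,
                  basin_length_km=52.0, relief_km=0.45))  # hypothetical basin
```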

Keywords: erosion, flood, mitigation, morphometry, watershed

Procedia PDF Downloads 87
684 Paraplegic Dimensions of Asymmetric Warfare: A Strategic Analysis for Resilience Policy Plan

Authors: Sehrish Qayyum

Abstract:

In this age of pervasive technology, asymmetric warfare cannot be won by force alone. An attuned psychometric study confirms that screaming is sometimes more productive than active retaliation against strong adversaries. Asymmetric warfare is a game of nerves and thoughts, where the least vigorous participation carries large anticipated losses. It creates a condition of paraplegia, a partial but permanent immobility that affects core warfare operations, which become screams rather than active retaliation. When one's own power is doubted, the doubt itself can ruin all planning, even planning done with superlative cost-benefit analysis. A strategically calculated assessment of asymmetric warfare across three chronological periods, from early WWI to WWII, from WWII to the Cold War, and from then to the current era, shows that courage carries nations from a battle of warriors to a battle of comrades. Asymmetric warfare has been the most difficult kind to fight and survive, owing to its unexpectedness and lethality despite preparations. Thought before action may be the best strategy, mixing Regional Security Complex Theory and the OODA loop to develop a Paraplegic Resilience Policy Plan (PRPP) to win asymmetric warfare. The PRPP may serve to control and halt the ongoing wave of terrorism, guerilla warfare, insurgencies, etc. The PRPP, along with a strategic work plan, is based on psychometric analysis to deal with any possible war condition and tactic, to save millions of innocent lives such as those lost in Christchurch, New Zealand in 2019, the November 2015 Paris attacks, and the 2016 Berlin market attack. Getting tangled in self-imposed epistemic dilemmas leaves regret as the only outcome of performance. This is a descriptive psychometric analysis of war conditions with a generic application of probability tests to find the best possible options and conditions for developing the PRPP for any foreseeable adverse condition. Innovation in technology begets innovation in planning, and the action plan serves as a rheostat approach to dealing with asymmetric warfare.

Keywords: asymmetric warfare, psychometric analysis, PRPP, security

Procedia PDF Downloads 136
683 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes were considered for a one-against-all assessment using four different techniques: Artificial Neural Networks with Multilayer Perceptron (ANN-MLP), Artificial Neural Networks with Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated over different parameter settings, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always indicate the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
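
A sketch of the one-against-all setup with an SVM base classifier, on synthetic stand-in data drawn with the class proportions quoted above; the study's actual coding scheme, kernels and tuned parameters are not reproduced here.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Stand-in for the bank data: 15 coded attributes, three classes with the
# abstract's proportions (0=non-defaulter, 1=defaulter, 2=temporary).
rng = np.random.default_rng(0)
X = rng.normal(size=(5432, 15))
y = rng.choice([0, 1, 2], size=5432, p=[0.48, 0.285, 0.235])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
ova = OneVsRestClassifier(make_pipeline(StandardScaler(),
                                        SVC(kernel="rbf", C=1.0)))
ova.fit(X_tr, y_tr)
# Rows/columns give the per-class true/false positives and negatives
# that the abstract compares across techniques.
print(confusion_matrix(y_te, ova.predict(X_te)))
```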

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 103
682 Writing a Parametric Design Algorithm Based on Recreation and Structural Analysis of Patkane Model: The Case Study of Oshtorjan Mosque

Authors: Behnoush Moghiminia, Jesus Anaya Diaz

Abstract:

The current study attempts to present the relationship between structural development, Patkaneh (one of the Iranian geometric patterns) and parametric algorithms by introducing two practical methods. While having a structural function, Patkaneh is also used as an ornamental element, so a scientific and practical review of Patkaneh is helpful. The current study aims to use Patkaneh as a parametric form generator based on an algorithm. The paper attempts to show how a more complete algorithm of this covering can be obtained based on the parametric study and analysis of a sample Patkaneh, and also investigates the relationship between the development of the geometrical pattern of Patkaneh as a structural-decorative element of Iranian architecture and digital design. In this regard, to achieve the research purposes, the researchers investigated one of the oldest types of Patkaneh in the architectural history of Iran, the Northern Entrance Patkaneh of Oshtorjan Jame' Mosque. A careful historical investigation was carried out to answer the research questions. Then, by investigating the structural behavior of Patkaneh, its decorative or structural-decorative role was examined to eliminate ambiguity. The geometrical structure of Patkaneh was then analyzed by introducing two practical methods. The first method is based on the constituent units of Patkaneh (square and diamond) and investigates the interactive relationships between them in 2D and 3D. This method is appropriate for cases where the geometrical relationships are rational and regular. The second method is based on the separation of the floors and the investigation of their interrelation. It is practical when the constituent units are not geometrically regular and are highly diverse. Finally, the parametric form algorithm of these methods was codified.

Keywords: geometric properties, parametric design, Patkaneh, structural analysis

Procedia PDF Downloads 151
681 Technical Efficiency in Organic and Conventional Wheat Farms: Evidence from a Primary Survey from Two Districts of Ganga River Basin, India

Authors: S. P. Singh, Priya, Komal Sajwan

Abstract:

With the increasing spread of organic farming in India, the costs, returns, efficiency, and social and environmental sustainability of organic vis-a-vis conventional farming systems have become topics of interest among agricultural scientists, economists, and policy analysts. A study on technical efficiency estimation under these farming systems, particularly in the Ganga River Basin, where the promotion of organic farming is incentivized, can help in understanding whether inputs are utilized to their maximum possible level and what measures can be taken to improve efficiency. This paper, therefore, analyses the technical efficiency of wheat farms operating under organic and conventional farming systems. The study is based on a primary survey of 600 farms (300 organic and 300 conventional) conducted in 2021 in two districts located in the Middle Ganga River Basin, India. Technical, managerial, and scale efficiencies of individual farms are estimated by applying the data envelopment analysis (DEA) methodology. The per-hectare value of wheat production is taken as the output variable, and the values of seeds, human labour, machine cost, plant nutrients, farmyard manure (FYM), plant protection, and irrigation charges are considered input variables for estimating the farm-level efficiencies. A post-DEA analysis is conducted using the Tobit regression model to identify the factors determining efficiency. The results show that technical efficiency is significantly higher under the conventional than the organic farming system, due to a larger gap in scale efficiency than in managerial efficiency. Further, 9.8% of conventional and only 1.0% of organic farms are found to operate at the most productive scale size (MPSS), while 99% of organic and 81% of conventional farms operate under increasing returns to scale (IRS). Organic farms perform well in managerial efficiency, but their technical efficiency is lower than that of conventional farms, mainly due to their relatively smaller scale size. The paper suggests that technical efficiency in organic wheat farming can be increased by upscaling farm size through incentivized group/collective farming in clusters.
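
For readers unfamiliar with DEA, the sketch below solves the input-oriented constant-returns (CCR) envelopment program for one farm using scipy; adding the convexity constraint sum(lambda) = 1 gives the variable-returns (BCC) score, and the ratio CCR/BCC is the scale efficiency discussed above. Data shapes and values are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0. X is (inputs x units),
    Y is (outputs x units); decision variables are [theta, lambda_1..n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])           # sum(lam*x) <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum(lam*y) >= y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

rng = np.random.default_rng(0)                   # hypothetical farm data
X = rng.uniform(1.0, 10.0, size=(7, 20))         # the 7 inputs, 20 farms
Y = rng.uniform(1.0, 10.0, size=(1, 20))         # per-hectare output value
print([round(ccr_efficiency(X, Y, j), 3) for j in range(20)])
```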

Keywords: organic, conventional, technical efficiency, determinants, DEA, Tobit regression

Procedia PDF Downloads 99
680 Case Study: The Analysis of Maturity of West Buru Basin and the Potential Development of Geothermal in West Buru Island

Authors: Kefi Rahmadio, Filipus Armando Ginting, Richard Nainggolan

Abstract:

This research examines the formation of the West Buru Basin and its potential as a geothermal resource. The research area is West Buru Island, which is part of the West Buru Basin. The island is located in Maluku Province, with Namlea as its capital. The island is divided into 10 districts, namely Kepalamadan, Airbuaya, Wapelau, Namlea, Waeapo, Batabual, Namrole, Waesama, Leksula, and Ambalau. The formations in this basin range from Permian to Quaternary, comprising the Ghegan, Dalan, Mefa, Kuma, Waeken, Wakatin, Ftau and Leko Formations. Prospect areas in the geothermal region were determined at the preliminary investigation stage through observation of manifestations, topographic form and structures found around the prospect areas; this was done because no subsurface data were available to support a more accurate delineation of the prospect areas. In the Waeapo area, based on field observation and structural analysis, the geothermal area is approximately 6 km²; with reference to the SNI 'Classification of Geothermal Potential' (No. 03-5012-1999), an area of 1 km² is assumed to yield 12.5 MWe. The speculative potential of this area is therefore Q = 6 x 12.5 MWe = 75 MWe. In the Bata Bual area, the geothermal prospect covers about 4 km², giving a speculative potential of Q = 4 x 12.5 MWe = 50 MWe. In the Kepala Madan area, based on the estimated extent of the manifestations, the prospect area is about 4 km², and the speculative geothermal potential is Q = 4 x 12.5 MWe = 50 MWe. These three areas hold the largest geothermal potential on West Buru Island. From the above research, it can be concluded that geothermal potential exists on West Buru Island, and further exploration is needed to find greater potential. The researchers therefore describe the geothermal potential contained in the West Buru Basin within the scope of West Buru Island; this potential can be utilized for the community of West Buru Island.
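
The speculative-potential arithmetic quoted from SNI 03-5012-1999 reduces to one multiplication, reproduced below for the three prospect areas.

```python
def speculative_potential(area_km2, mwe_per_km2=12.5):
    """SNI 03-5012-1999 speculative rule: 12.5 MWe per km^2 of prospect."""
    return area_km2 * mwe_per_km2

for name, area in {"Waeapo": 6, "Bata Bual": 4, "Kepala Madan": 4}.items():
    print(name, speculative_potential(area), "MWe")  # 75, 50, 50
```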

Keywords: West Buru basin, West Buru island, potential, Waepo, Bata Bual, Kepala Madan

Procedia PDF Downloads 226
679 Rating Agreement: Machine Learning for Environmental, Social, and Governance Disclosure

Authors: Nico Rosamilia

Abstract:

The study evaluates the importance of non-financial disclosure practices for regulators, investors, businesses, and markets. It aims to create a sector-specific set of indicators for environmental, social, and governance (ESG) performance as an alternative to the ratings of the agencies. The existing literature extensively studies the implementation of ESG rating systems. This study, by contrast, has a twofold outcome. Firstly, it should generalize incentive systems and governance policies for ESG and sustainability principles, thereby contributing to the EU Sustainable Finance Disclosure Regulation. Secondly, it concerns the market and investors by highlighting successful sustainable investing. Indeed, the study contemplates the effect of ESG adoption practices on corporate value. The research explores the asset-pricing angle in order to shed light on the fragmented debate on the finance of ESG: investors may be misguided about the positive or negative effects of ESG on performance. The paper proposes a different method to evaluate ESG performance. By comparing the results of a traditional econometric approach (Lasso) with a machine learning algorithm (Random Forest), the study establishes a set of indicators for ESG performance. The research thereby also contributes empirically to the theoretical strands of literature regarding model selection and variable importance in a finance framework. The algorithms yield sector-specific indicators. This set of indicators defines an alternative to the compounded scores of ESG rating agencies and avoids the possible offsetting effects within such scores. With this approach, the paper defines a sector-specific set of indicators to standardize ESG disclosure. Additionally, it tries to shed light on the absence of a clear understanding of the direction of the ESG effect on corporate value (the problem of endogeneity).
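
A minimal sketch of the Lasso-versus-Random-Forest comparison for indicator selection, on synthetic stand-in data; the actual panel of ESG indicators and the firm-value measure used in the study are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))                      # stand-in indicator panel
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500)  # firm-value proxy

lasso = LassoCV(cv=5).fit(X, y)
forest = RandomForestRegressor(n_estimators=300, random_state=1).fit(X, y)

kept_by_lasso = np.flatnonzero(lasso.coef_)         # indicators the Lasso retains
rf_ranking = np.argsort(forest.feature_importances_)[::-1]  # RF importance order
print(kept_by_lasso, rf_ranking[:10])
```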

Keywords: ESG ratings, non-financial information, value of firms, sustainable finance

Procedia PDF Downloads 84
678 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

This paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using established equations such as Penman-Monteith (FAO56) and the soil water balance (SWB). These include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into training, test, and validation sets in several proportion combinations, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, the data collection and preprocessing, the model construction, and the evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictive ability of the developed model on the basis of the performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
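
A sketch of SVR training with kernel and hyperparameter tuning in scikit-learn, using a synthetic stand-in for the nine sensor-derived features listed above; the study's actual data, toolchain (which also includes KNIME and RStudio) and tuned values are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

# Stand-in daily records; columns: Rs, T, P, RH, u2, R, DP, ST, dSM.
rng = np.random.default_rng(42)
X = rng.normal(size=(365, 9))
y = 2.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=365)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR()),
    {"svr__kernel": ["rbf", "poly"],        # kernel choice is tuned, as above
     "svr__C": [1, 10, 100],
     "svr__epsilon": [0.01, 0.1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)
print(r2_score(y_te, pred), mean_absolute_error(y_te, pred))
```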

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 69
677 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control

Authors: Sung-Jun Yoo, Kazuhide Ito

Abstract:

In this study, a comprehensive computer simulated person (CSP), which integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway), was applied to the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with this advanced CSP, which is combined with a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model for high-accuracy analysis targeting the micro-climate around the human body and the respiratory region. This comprehensive method can estimate not only contaminant inhalation but also the continuous interaction in contaminant transfer between the indoor space, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone, for health risk assessment. This study focused on the use of the CSP as an air/thermal quality sensor indoors, i.e., on the application of the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde generated from the floor material was assumed as the target contaminant, and flow field, sensible/latent heat and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.

Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality

Procedia PDF Downloads 361
676 Effect of Phthalates on Male Infertility: Myth or Truth?

Authors: Rashmi Tomar, A. Srinivasan, Nayan K. Mohanty, Arun K. Jain

Abstract:

Phthalates have been used as additives in industrial products since the 1930s and are universally considered to be ubiquitous environmental contaminants. The general population is exposed to phthalates through consumer products, as well as through diet and medical treatments. Animal studies showing the existence of an association between some phthalates and testicular toxicity have generated public and scientific concern about the potential adverse effects of environmental changes on male reproductive health. Unprecedented declines in fertility rates and semen quality have been reported during the last half of the 20th century in developed countries, and there is increasing interest in the potential relationship between exposure to environmental contaminants, including phthalates, and human male reproductive health. Phthalates may be associated with altered endocrine function and adverse effects on male reproductive development and function, but human studies are limited. The aim of the present study was the detection of phthalate compounds and the estimation of their metabolites in infertile and fertile males. Blood and urine samples were collected from 150 infertile patients and 75 fertile volunteers recruited through the Department of Urology, Safdarjung Hospital, New Delhi. Blood was collected in separate glass tubes from the antecubital vein of the patients, serum was separated, and phthalate levels in the serum samples were estimated by Gas Chromatography / Mass Spectrometry using the detailed NIOSH / OSHA protocol. Urine of infertile and fertile subjects was collected, extracted using a solid phase extraction method, and analyzed by HPLC. In conclusion, to the best of our knowledge the present study is the first human study to show the presence of phthalates in human serum samples and of their metabolites in urine samples. Significant differences in several phthalates were observed between infertile and fertile healthy individuals.

Keywords: Gas Chromatography, HPLC, male infertility, phthalates, serum, toxicity, urine

Procedia PDF Downloads 363
675 Dynamic Externalities and Regional Productivity Growth: Evidence from Manufacturing Industries of India and China

Authors: Veerpal Kaur

Abstract:

The present paper aims at investigating the role of the dynamic externalities of agglomeration in the regional productivity growth of the manufacturing sector in India and China. Taking 2-digit-level manufacturing sector data for the states and provinces of India and China, respectively, for the period 1998-99 to 2011-12, this paper examines the effect of dynamic externalities, namely Marshall-Arrow-Romer (MAR) specialization externalities, Jacobs's diversity externalities, and Porter's competition externalities, on the regional total factor productivity growth (TFPG) of the manufacturing sector in both economies. Regressions were carried out on pooled data for all 2-digit manufacturing industries for India and China separately. The panel estimation is based on a fixed-effects-by-sector model. The results of the econometric exercise show that labour-intensive industries in Indian regional manufacturing benefit from diversity externalities, while capital-intensive industries gain more from specialization in terms of TFPG. In China, diversity externalities and competition externalities hold better prospects for regional TFPG in both labour-intensive and capital-intensive industries. But if we look at the results for coastal and non-coastal regions separately, specialization tends to exert a positive effect on TFPG in coastal regions whereas it has a negative effect on the TFPG of non-coastal regions. Competition externalities exert a negative effect on the TFPG of non-coastal regions but a positive effect on the TFPG of coastal regions. Diversity externalities make a positive contribution to TFPG in both coastal and non-coastal regions. So the results of the study suggest that the importance of dynamic externalities should not be examined by pooling all industries and all regions together. This could hold differential implications for region-specific and industry-specific policy formulation. Other important variables explaining regional-level TFPG in both India and China are the availability of infrastructure, the level of competitiveness, foreign direct investment, exports and the geographical location of the region (especially in China).

Keywords: China, dynamic externalities, India, manufacturing, productivity

Procedia PDF Downloads 123
674 Predicting the Turbulence Intensity, Excess Energy Available and Potential Power Generated by Building Mounted Wind Turbines over Four Major UK Cities

Authors: Emejeamara Francis

Abstract:

The future of potential wind energy applications within suburban/urban areas currently faces various problems. These include insufficient assessment of the urban wind resource, questions over the effectiveness of commercial gust control solutions, and the unavailability of effective and cheap tools for scoping the potential of urban wind applications within built-up environments. Effective assessment of the potential of urban wind installations requires an estimation of the total energy that would be available to them were effective control systems used, and an evaluation of the potential power generated by the wind system. This paper presents a methodology for predicting the power generated by a wind system operating within an urban wind resource. The method was developed by using high-temporal-resolution wind measurements from eight potential sites within urban and suburban environments as inputs to a vertical axis wind turbine multiple stream tube model. A relationship between the unsteady performance coefficient obtained from the stream tube model results and the turbulence intensity was demonstrated. Hence, an analytical methodology for estimating the unsteady power coefficient at a potential turbine site is proposed. This is combined with analytical models developed to predict the wind speed and the excess energy content (EEC) available, in order to estimate the potential power generated by wind systems at different heights within a built environment. Estimates of turbulence intensity, wind speed, EEC and turbine performance based on the current methodology allow a more complete assessment of the available wind resource and of potential urban wind projects. The methodology is applied to four major UK cities, namely Leeds, Manchester, London and Edinburgh, and the potential to map turbine performance at different heights within a typical urban city is demonstrated.
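
Two of the quantities at the heart of such assessments are simple statistics of a high-rate wind record: the turbulence intensity, and the excess of mean(u³) over mean(u)³, which is one common way of quantifying the extra energy in gusty flow. The sketch below assumes that definition of the excess energy content; the wind record and turbine parameters are hypothetical, not values from the study.

```python
import numpy as np

def turbulence_intensity(u):
    """TI = sigma_u / U_mean over the averaging window."""
    return np.std(u) / np.mean(u)

def excess_energy_content(u):
    """Assumed EEC definition: extra kinetic-energy flux relative to the
    steady-flow estimate based on the mean speed alone."""
    return np.mean(u ** 3) / np.mean(u) ** 3 - 1.0

def mean_power(u, rho=1.225, area=3.0, cp=0.3):
    """Mean power from the instantaneous cube of the wind speed."""
    return 0.5 * rho * area * cp * np.mean(u ** 3)

u = np.random.default_rng(7).gamma(4.0, 1.2, size=36000)  # stand-in 1 Hz record
print(turbulence_intensity(u), excess_energy_content(u), mean_power(u))
```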

Keywords: small-scale wind, turbine power, urban wind energy, turbulence intensity, excess energy content

Procedia PDF Downloads 277
673 Analyzing the Efficiency of Initiatives Taken against Disinformation during Election Campaigns: Case Study of Young Voters

Authors: Fatima-Zohra Ghedir

Abstract:

Social media platforms have been actively working on solutions and have combined their efforts with media, policy makers, educators and researchers to protect citizens and prevent interference in information, political discourse and elections. Facebook, for instance, deleted fake accounts, implemented fake-account and fake-content detection algorithms, partnered with news agencies to manually fact-check content, and changed its newsfeed display. Twitter and Instagram regularly communicate their efforts and notify their users of improvements and safety guidelines. More funds have been allocated to media literacy programs to empower citizens in anticipation of the coming elections. This paper investigates the efficiency of these initiatives and analyzes the metrics used to measure their success or failure. The objective is also to determine the segments of the population more prone to fall into disinformation traps during elections despite the measures taken over the last four years. This study also examines the groups who were positively impacted by these measures. The paper relies on both desk and field methodologies. For this study, a survey was administered to French students aged between 17 and 29 years old, and semi-guided interviews were conducted with a similar audience. The analysis of the survey and of the interviews shows that respondents were exposed to the initiatives described above and are aware of the existence of disinformation issues. However, they do not understand what disinformation really entails or means. For instance, for most of them, disinformation is synonymous with an opposing point of view, without regard to the truthfulness of the content. Besides, they still consume and believe the information shared by their friends and family, with little questioning of the ways their close ones get informed.

Keywords: democratic elections, disinformation, foreign interference, social media, success metrics

Procedia PDF Downloads 110
672 Prevalence of Elder Abuse and Effects of Social Factors on It

Authors: Ezat Vahidian, Babak Eshrati

Abstract:

Introduction: Elder abuse, a very complex issue with diverse definitions and names, has been very slow to capture the public eye and public policy, since it is manifested at many levels and requires the involvement of different types of professionals. While elder abuse is not a new phenomenon, the speed of population ageing worldwide is likely to lead to an increase in its incidence and prevalence. Elder abuse has devastating consequences for older persons, such as poor quality of life, psychological distress, and loss of property and security. It is also associated with increased mortality and morbidity. Elder abuse is a problem that manifests itself in both rich and poor countries and at all levels of society. Purpose: The purpose of this study is to determine the prevalence of elder abuse and the effects of social factors on it in Markazi Province. Materials and methods: The study population was all elders in Markazi Province who were reachable by geographical address from the register of rural and urban households. The study was cross-sectional with multi-stage sampling: the first stage was stratification by rural and urban area, and the second was cluster sampling with equal clusters. The estimated sample size was 472 persons, increased by the design effect to 1,110 persons. Data were collected by questionnaire and analyzed in SPSS using the chi-square test. Results: This study showed that 70 persons were abused (42.8% male and 57.2% female); their mean age was 74.7 years, 64% were married and 31% were widowed. There was no significant association between elder abuse and area of living (p=0.299), occupation (p=0.104), education (p=0.358) or age (p=0.104). There were significant associations with physical impairment (p=0.08) and movement impairment (p=0.008). Conclusion: The results verify that maltreatment occurred among the aged persons. Analysis of the data indicated that elder abuse exists in every socioeconomic group, at every level of education, in urban and rural areas, and in men and women. The prevalence of elder abuse was 6.3% (70 persons), which is consistent with data from developed countries based on limited samples.

Keywords: elder abuse, education, occupation, area of living

Procedia PDF Downloads 403
671 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify gaps in it, and suggest an improved approach; we design the algorithm, develop software to measure quality from images, where the accuracy of the image analysis shows better results, and compare the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. This research focuses on sorting food and vegetables from images: the application sorts and grades them after processing the images, and it can produce fewer errors than human-based manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This document is about the quality detection of fruits and vegetables using images. Many customers suffer from unhealthy fruits and vegetables from suppliers, and no proper quality measurement is followed by hotel managements. We have therefore developed software that measures the quality of fruits and vegetables from images and indicates whether they are fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN and transfer learning for grading feature extraction.
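
A minimal transfer-learning sketch in the spirit of the pipeline described (VGG16 backbone, fresh-versus-rotten head, TensorFlow Lite export); the dataset objects, class count and training settings here are hypothetical.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                         # freeze the pretrained features
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # fresh vs rotten
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets assumed
# open("model.tflite", "wb").write(
#     tf.lite.TFLiteConverter.from_keras_model(model).convert())
```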

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 70
670 Developing a Green Strategic Management Model with Regard to HSE-MS

Authors: Amin Padash, Gholam Reza Nabi Bid Hendi, Hassan Hoveidi

Abstract:

Purpose: The aim of this research is to develop a model for green management based on the Health, Safety and Environmental Management System (HSE-MS). An HSE-MS can be a powerful tool for organizations both to improve their environmental, health and safety performance and to enhance their business efficiency toward green management. Model: The model developed in this study can be used by industries as a guideline for implementing green management with regard to the Health, Safety and Environmental Management System. Case Study: The Pars Special Economic / Energy Zone Organization, on behalf of Iran's Petroleum Ministry and the National Iranian Oil Company (NIOC), manages and develops the South and North oil and gas fields in the region. Methodology: In terms of its objective this research is applied, and in terms of implementation it is descriptive and prescriptive. We used a multiple criteria decision-making (MCDM) technique to determine the priorities of the factors. Following a process approach, the model consists of the following steps and components: first, the factors involved in green issues are determined; based on them, a framework is constructed; then, using an MCDM algorithm (TOPSIS), the priorities of the basic variables are determined. The authors believe that the proposed model and the results of this research can help industry managers implement green practices in accordance with the Health, Safety and Environmental Management System in a more efficient and effective manner. Findings and conclusion: The basic factors involved in green issues and their weights are the main finding; the model and the relations between the factors are the other findings of this research. A petrochemical company is considered as the case for promoting ecological industrial thinking.
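
For reference, the TOPSIS prioritization step reduces to a few array operations; the sketch below implements the textbook algorithm, with a hypothetical decision matrix (alternatives by criteria) and weights.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns); benefit[j] is
    True when a higher value of criterion j is better."""
    v = matrix / np.linalg.norm(matrix, axis=0) * weights   # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)          # closeness: higher ranks first

scores = topsis(np.array([[0.7, 0.3, 120.0],     # hypothetical factor scores
                          [0.6, 0.5, 100.0],
                          [0.8, 0.2, 150.0]]),
                np.array([0.5, 0.3, 0.2]),
                np.array([True, True, False]))    # third criterion is a cost
print(scores)
```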

Keywords: Fuzzy-AHP method, green management, health, safety and environmental management system, MCDM technique, TOPSIS

Procedia PDF Downloads 411
669 Efficient Chiller Plant Control Using Modern Reinforcement Learning

Authors: Jingwei Du

Abstract:

The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods fall into two categories: empirical and model-based. To be effective, the former relies heavily on engineering expertise and the latter requires extensive historical data. Reinforcement Learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a 'policy'. This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available, allowing the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from 'alignment', a process that utilizes human feedback to fine-tune a pretrained model in case of unsafe content. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both the critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison, and it turns out that the proposed method is both safe and effective while needing little to no historical data to start up.
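
The core of PPO is its clipped surrogate objective, sketched below in PyTorch; this is the textbook loss only, not the paper's full agent or its chiller-plant environment, and the example numbers are hypothetical.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective; returned negated so that minimizing
    it with SGD maximizes the PPO objective."""
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.mean(torch.min(ratio * advantages, clipped * advantages))

# Tiny example with hypothetical action log-probabilities and advantages.
logp_old = torch.log(torch.tensor([0.20, 0.50]))
logp_new = torch.log(torch.tensor([0.25, 0.45]))
print(ppo_clip_loss(logp_new, logp_old, torch.tensor([1.0, -0.5])))
```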

Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning

Procedia PDF Downloads 30
668 Estimation of Physico-Mechanical Properties of Tuffs (Turkey) from Indirect Methods

Authors: Mustafa Gok, Sair Kahraman, Mustafa Fener

Abstract:

In rock engineering applications, determining the uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), and basic index properties such as density, porosity, and water absorption is crucial for the design of both underground and surface structures. However, obtaining reliable samples for direct testing, especially from rocks that weather quickly and have low strength, is often challenging. In such cases, indirect methods provide a practical alternative for estimating the physical and mechanical properties of these rocks. In this study, tuff samples collected from the Cappadocia region (Nevşehir) in Turkey were subjected to indirect testing methods. Over 100 tests were conducted, using the needle penetrometer index (NPI), point load strength index (PLI), and disc shear index (BPI) to estimate the uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), density, and water absorption index of the tuffs. The relationships between the results of these indirect tests and the target physical properties were evaluated using simple and multiple regression analyses. The findings reveal strong correlations between the indirect methods and the mechanical properties of the tuffs: both the uniaxial compressive strength and the Brazilian tensile strength could be accurately predicted from the NPI, PLI, and BPI values. The regression models developed in this study allow rapid, cost-effective assessments of tuff strength in cases where direct testing is impractical. These results are particularly valuable for geological engineering applications under time and resource constraints. This study highlights the significance of indirect methods as reliable predictors of the mechanical behavior of weak rocks like tuffs. Further research is recommended to explore the application of these methods to other rock types with similar characteristics and to compare the results with those of established direct test methods.
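
The multiple-regression step reduces to an ordinary least-squares fit of UCS (or BTS) on the three index values; a numpy sketch with hypothetical test data is shown below.

```python
import numpy as np

def fit_strength(npi, pli, bpi, strength):
    """OLS fit strength = b0 + b1*NPI + b2*PLI + b3*BPI, returning the
    coefficients and the coefficient of determination R^2."""
    X = np.column_stack([np.ones_like(npi), npi, pli, bpi])
    beta, *_ = np.linalg.lstsq(X, strength, rcond=None)
    resid = strength - X @ beta
    ss_tot = (strength - strength.mean()) @ (strength - strength.mean())
    return beta, 1.0 - (resid @ resid) / ss_tot

rng = np.random.default_rng(1)                  # hypothetical index readings
npi, pli, bpi = rng.uniform(1.0, 30.0, (3, 100))
ucs = 2.0 + 0.8 * npi + 4.0 * pli + 1.5 * bpi + rng.normal(0.0, 2.0, 100)
print(fit_strength(npi, pli, bpi, ucs))
```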

Keywords: Brazilian tensile strength, disc shear strength, indirect methods, tuffs, uniaxial compressive strength

Procedia PDF Downloads 17
667 Evaluation of Genetic Fidelity and Phytochemical Profiling of Micropropagated Plants of Cephalantheropsis obcordata: An Endangered Medicinal Orchid

Authors: Gargi Prasad, Ashiho A. Mao, Deepu Vijayan, S. Mandal

Abstract:

The main objective of the present study was to optimize and develop an efficient protocol for the in vitro propagation of a medicinally important orchid, Cephalantheropsis obcordata (Lindl.) Ormerod, along with a genetic stability analysis of the regenerated plants. This plant has been traditionally used in Chinese folk medicine, and the decoction of the whole plant is known to possess anticancer activity. Nodal segments used as explants were inoculated on Murashige and Skoog (MS) medium supplemented with various concentrations of isopentenyl adenine (2iP). The rooted plants were successfully acclimatized in the greenhouse with a 100% survival rate. Inter-simple sequence repeat (ISSR) markers were used to assess the genetic fidelity of the in vitro raised plants against the mother plant. Monomorphic bands, showing the absence of polymorphism, were found in all in vitro raised plantlets analyzed, confirming the genetic uniformity of the regenerants. Phytochemical analysis was performed to compare the antioxidant activities and HPLC fingerprints of 80% aqueous ethanol extracts of the leaves and stems of in vitro and in vivo grown C. obcordata. The extracts were examined for their antioxidant activities using the free radical 1,1-diphenyl-2-picrylhydrazyl (DPPH) scavenging method, the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging assay, reducing power capacity, and estimation of the total phenolic, flavonoid and flavonol contents. A simplified method for the detection of ascorbic acid, phenolic acids and flavonoid content was also developed using reversed-phase high-performance liquid chromatography (HPLC). This is the first report on the micropropagation, genetic integrity and quantitative phytochemical analysis of in vitro regenerated plants of C. obcordata.

Keywords: Cephalantheropsis obcordata, genetic fidelity, ISSR markers, HPLC

Procedia PDF Downloads 156
666 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where the uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain angle. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. An adversary aiming to obtain the exact offset values can therefore only employ a brute-force attack, but the large number of possible shifted-constellation combinations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. The inter-cell interference impact on the performance of the method is therefore investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK method decreases with decreasing signal-to-interference-plus-noise ratio.
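
To make the shifted-constellation idea concrete, the following minimal Python sketch generates an N-PSK constellation rotated by an offset angle and draws one pilot from each of two shifted constellations. The constellation size and offset values are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of shifted N-PSK pilot constellations, assuming the
# method rotates the original N-PSK symbols by offset angles; N and the
# offsets below are illustrative, not values from the paper.
import numpy as np

def npsk(n: int, offset_deg: float = 0.0) -> np.ndarray:
    """Unit-energy N-PSK constellation rotated by offset_deg degrees."""
    k = np.arange(n)
    return np.exp(1j * (2 * np.pi * k / n + np.deg2rad(offset_deg)))

N = 8
shift_a, shift_b = 10.0, 25.0                  # two legitimate shift angles
pilot_a = np.random.choice(npsk(N, shift_a))   # first training pilot
pilot_b = np.random.choice(npsk(N, shift_b))   # second training pilot
print(pilot_a, pilot_b)
# Detection hinges on the relation between the two shifts
# (shift_b - shift_a) rather than on their individual values.
```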

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 217
665 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values that discriminate subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of test results, and the ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, it suffers from high variability, especially for small sample sizes, and, being a step function, it can yield different false positive rates for a single true positive rate and vice versa. Moreover, since the true ROC curve is assumed to be smooth, the jagged empirical estimate tends to underestimate it. Several smoothing methods have therefore been explored, including kernel estimates, log-concave density estimates, maximum-likelihood fitting of specified univariate distributions to the data, and smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results drawn from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the methods across scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from a normal distribution.
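
The sketch below illustrates plain Gaussian-kernel ROC smoothing, in which the empirical distribution functions of the two groups are replaced by kernel-smoothed estimates. It is a simplified stand-in with simulated data and a rule-of-thumb bandwidth; it does not implement the paper's boundary-corrected kernel.

```python
# Minimal sketch of kernel smoothing of an ROC curve: the empirical
# distribution functions of both groups are replaced by Gaussian-
# kernel-smoothed CDF estimates. Data and bandwidth are illustrative;
# the paper's boundary-corrected kernel is not implemented here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)     # non-diseased test results
diseased = rng.normal(1.2, 1.0, 50)    # diseased test results

def smooth_cdf(x, data, h):
    """Kernel-smoothed CDF estimate at points x."""
    return norm.cdf((x[:, None] - data[None, :]) / h).mean(axis=1)

h = 1.06 * healthy.std() * len(healthy) ** (-1 / 5)  # rule-of-thumb bandwidth
thresholds = np.linspace(-4, 5, 200)
fpr = 1 - smooth_cdf(thresholds, healthy, h)   # 1 - specificity
tpr = 1 - smooth_cdf(thresholds, diseased, h)  # sensitivity

# Area under the smoothed curve via the trapezoid rule (fpr ascending)
order = np.argsort(fpr)
auc = np.sum(np.diff(fpr[order]) * (tpr[order][:-1] + tpr[order][1:]) / 2)
print(f"smoothed AUC ~ {auc:.3f}")
```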

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 152
664 Modelling Insider Attacks in Public Cloud

Authors: Roman Kulikov, Svetlana Kolesnikova

Abstract:

Over the last decade, Cloud Computing technologies have rapidly become ubiquitous. Each year, more organizations, corporations, internet services, and social networks trust their business-sensitive information to the Public Cloud. Data storage in the Public Cloud is protected by security mechanisms such as firewalls, cryptographic algorithms, and backups. In this way, however, only outsider attacks can be prevented, whereas virtualization tools can easily be compromised by an insider. The protection of the Public Cloud's critical elements from an internal intruder remains extremely challenging. A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems (OS) to share a single hardware processor in Cloud Computing. One of the hypervisor's functions is to enforce access control policies; it also prevents guest OSs from disrupting each other and from accessing each other's memory or disk space. The hypervisor is one of the most critical and vulnerable elements of Cloud Computing infrastructure. Nevertheless, it has been poorly protected from being compromised by insiders. By exploiting certain vulnerabilities, privilege escalation can easily be achieved in insider attacks on the hypervisor. In this way, an internal intruder who has compromised one process is able to gain control of the entire virtual machine. The consequences of insider attacks in the Public Cloud may therefore be more catastrophic for virtual tools and sensitive data than those of outsider attacks. So far, almost no preventive security countermeasures have been developed, and little attention has been paid to developing models that assist risk mitigation strategies. In this paper, a formal model of insider attacks on the hypervisor is designed. Our analysis identifies critical hypervisor vulnerabilities that can easily be compromised by an internal intruder, and possible conditions for successful attacks are uncovered. The development of preventive security countermeasures can thus be improved on the basis of the proposed model.
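
As a toy illustration of the kind of state-transition reasoning such a model involves, the sketch below searches a small escalation graph for paths from a compromised guest process to hypervisor control. The states, transitions, and vulnerability labels are entirely hypothetical and do not reproduce the paper's formal model.

```python
# Illustrative sketch only: a toy state-transition view of insider
# privilege escalation toward hypervisor control. The states, edges,
# and vulnerability labels are hypothetical, not the paper's model.
from collections import deque

# state -> [(next_state, exploited weakness)]
transitions = {
    "guest_process": [("guest_os_root", "local privilege escalation")],
    "guest_os_root": [("hypervisor", "VM-escape vulnerability")],
    "hypervisor": [("other_guest_vms", "shared memory/disk access")],
}

def attack_paths(start, goal):
    """Breadth-first search for escalation paths from start to goal."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            yield path
            continue
        for nxt, _ in transitions.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])

for p in attack_paths("guest_process", "other_guest_vms"):
    print(" -> ".join(p))
```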

Keywords: insider attack, public cloud, cloud computing, hypervisor

Procedia PDF Downloads 361
663 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies of sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test the extended framework. Our approach follows a holistic feature engineering methodology, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, yields a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, the system offers a simulated annual return of approximately 900% on an initial $100 investment. Our findings not only contribute to the academic literature but also have practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision-making in sports betting markets.
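
The following minimal Python sketch shows point-spread prediction with XGBoost regression on simulated data. The features, data, and hyperparameters are hypothetical placeholders, not the paper's NBA dataset or tuned pipeline.

```python
# Minimal sketch of point-spread prediction with XGBoost regression,
# assuming pre-engineered team features; the features and data below
# are synthetic, not the paper's NBA dataset or tuned pipeline.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
n_games = 500
# Hypothetical features, e.g. rolling offensive/defensive ratings, rest days
X = rng.normal(size=(n_games, 4))
true_w = np.array([6.0, -4.0, 1.5, 0.5])
y = X @ true_w + rng.normal(scale=3.0, size=n_games)  # point spread

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
mae = np.abs(pred - y[400:]).mean()
print(f"hold-out MAE: {mae:.2f} points")
# A betting rule would compare `pred` against the bookmaker's line and
# bet only when the predicted edge exceeds a chosen margin.
```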

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 102