Search results for: queueing calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1260

330 Comparing Energy Labelling of Buildings in Spain

Authors: Carolina Aparicio-Fernández, Alejandro Vilar Abad, Mar Cañada Soriano, Jose-Luis Vivancos

Abstract:

The building sector is responsible for 40% of total energy consumption in the European Union (EU). Implementing strategies to quantify and reduce buildings' energy consumption is therefore indispensable for reaching the EU's carbon neutrality and energy efficiency goals. Each Member State has transposed the European Directives according to its own peculiarities: existing technical legislation, constructive solutions, climatic zones, etc. Accordingly, under the Energy Performance of Buildings Directive, Member States have developed different Energy Performance Certificate schemes, each using the energy simulation software tools proposed for its national or regional area. Energy Performance Certificates provide powerful and comprehensive information for predicting, analyzing and improving the energy demand of new and existing buildings, and energy simulation software and databases allow a better understanding of the current constructive reality of the European building stock. However, Energy Performance Certificates still face several issues before they can be considered a reliable and global source of information, since the different calculation tools in use are not interoperable. In this document, TRNSYS (TRaNsient System Simulation program) software is used to calculate the energy demand of a building, and the result is compared with the energy labelling obtained with the official Spanish software tools. We demonstrate the possibility of using non-official software tools to calculate the Energy Performance Certificate. This approach could therefore be used throughout the EU to compare results across all the cases proposed by the EU Member States. For the simulations, an isolated single-family house with different construction solutions is considered, and results are obtained for every climatic zone of the Spanish Technical Building Code.

Keywords: energy demand, energy performance certificate, EPBD, TRNSYS, buildings

Procedia PDF Downloads 108
329 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC

Authors: Mohamed Zellagui, Heba Ahmed Hassan

Abstract:

This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated based Series Compensator (PWMSC). PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as the device provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes PWMSC highly effective in controlling power flow and increasing system stability. The purpose of this work is to study the impact of fault resistance (RF), varied from 0 to 30 Ω, on the fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with PWMSC, for both the maximum and minimum duty cycle values in capacitive and inductive modes. The paper presents simulation results which are verified by the theoretical analysis.
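
As a minimal illustration of the symmetrical components method used above, the sketch below computes the phase-a ground-fault current over the fault-resistance sweep; all impedances, including the PWMSC apparent reactance, are illustrative placeholders, not parameters of the studied Algerian network.

```python
import math

E = 30e3 / math.sqrt(3)     # phase-to-ground source voltage (V), 30 kV system
Z1 = complex(0.5, 2.0)      # positive-sequence impedance to the fault (ohm, assumed)
Z2 = complex(0.5, 2.0)      # negative-sequence impedance (ohm, assumed)
Z0 = complex(1.5, 6.0)      # zero-sequence impedance (ohm, assumed)
X_pwmsc = -1.0j             # apparent reactance injected by the PWMSC
                            # (negative = capacitive mode; assumed value)

for Rf in (0.0, 10.0, 20.0, 30.0):   # fault resistance sweep as in the paper
    # for a single line-to-ground fault the three sequence networks are in
    # series: I0 = I1 = I2 = E / (Z1 + Z2 + Z0 + 3*Rf)
    I0 = E / (Z1 + Z2 + Z0 + 3 * Rf + X_pwmsc)
    If = 3 * I0                       # current in the faulted phase
    print(f"Rf = {Rf:5.1f} ohm -> |If| = {abs(If):8.1f} A")
```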

Keywords: pulse width modulated series compensator (PWMSC), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method

Procedia PDF Downloads 477
328 Islamic Banking Recovery Process and Its Parameters: A Practitioner’s Viewpoints in the Light of Humanising Financial Services

Authors: Muhammad Izzam Bin Mohd Khazar, Nur Adibah Binti Zainudin

Abstract:

Islamic banking institutions, like other financial institutions, are required to maintain a prudent approach to ensure that any financing given is able to generate income for their respective shareholders. Since customer payment defaults are likely to occur in financing, a prudent recovery process is a must to keep financing losses within acceptable limits. The objective of this research is to identify recovery best practices that are anticipated to benefit both the bank and its customers. This study addresses issues arising in the current recovery process and then proposes humanising recovery solutions in the light of the Maqasid Shariah. The study identified the main issues pertaining to the Islamic recovery process, which can be categorized into knowledge crisis, process issues, specific treatment cases and system issues. The knowledge crisis relates to the parties directly involved, including judges, solicitors and salespersons, while the recovery process issues include the issuance of reminders, foreclosure and the repossession of assets. Furthermore, special treatment for particular cases should be observed, since different contracts in Islamic banking products need different treatment. Finally, issues in the systems used in the recovery process remain unresolved, since the existing technology in this area is still too young to embrace Islamic finance requirements and their nature of calculation. In order to humanise financial services in the Islamic banking recovery process, we highlight four main recommendations to be implemented by Islamic Financial Institutions: 1) early deterrence by improving awareness, 2) improvement of the internal process, 3) a reward mechanism, and 4) creative penalties that raise awareness among all stakeholders.

Keywords: humanizing financial services, Islamic Finance, Maqasid Syariah, recovery process

Procedia PDF Downloads 166
327 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling

Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta

Abstract:

In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching a proton-conducting phosphoric acid doped polymeric membrane. Due to low mechanical stability, flooding and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus and silica based 3D inorganic gels have gained attention in the fields of supercapacitors, fuel cells and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Moreover, since large amounts of water molecules and phosphoric acid can easily be trapped in the cavities of the Si-O-Si network, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical and morphological behavior of this inorganic gel at various P to Si ratios. We have used dipole-dipole interactions, H bonding and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependency of proton conductivity. The observed results yield a model that fits well with experimental data and other literature values. We have also studied the mechanism behind proton conductivity, and finally we propose a structure for the gel paste with the optimum P to Si ratio.
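
A common way to summarize the temperature dependency of proton conductivity extracted from MD runs is an Arrhenius fit; the sketch below does this on illustrative data points (not values from the paper).

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant (eV/K)
T = np.array([300., 320., 340., 360., 380.])                 # temperatures (K)
sigma = np.array([1.2e-3, 2.1e-3, 3.4e-3, 5.2e-3, 7.6e-3])   # S/cm (assumed)

# Arrhenius form: sigma = sigma0 * exp(-Ea / (kB * T)),
# so ln(sigma) is linear in 1/T with slope -Ea/kB
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea = -slope * kB
print(f"activation energy Ea = {Ea:.3f} eV, sigma0 = {np.exp(intercept):.3e} S/cm")
```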

Keywords: first-principles calculation, molecular dynamics simulation, phosphorus and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity

Procedia PDF Downloads 101
326 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading

Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali

Abstract:

Experimental and analytical studies were carried out to examine the structural behavior of the precast foamed concrete sandwich panel (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were fabricated with varying heights to study an important parameter, the slenderness ratio (H/t). The production technique of PFCSP and the test setup procedure are described. The experimental results were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns and mode of failure. An analytical study by finite element analysis was implemented, and theoretical ultimate in-plane shear strengths were calculated using the ACI 318 equation adopted for reinforced concrete walls, aimed at predicting the in-plane shear strength of PFCSP. Decreasing the slenderness ratio from 24 to 14 increased the ultimate in-plane shear strength capacity by 26.51% in the experiments and by 21.91% in the FEA models. The experimental results, FEA model data and theoretical calculation values were compared and showed close agreement with a high degree of accuracy. On the basis of the results obtained, the PFCSP wall therefore has potential as an alternative to the conventional load-bearing wall system.
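
The abstract cites an adopted ACI 318 wall equation without reproducing it; a commonly used ACI 318 form for the nominal in-plane shear strength of reinforced concrete walls (psi units) is Vn = Acv(αc·λ·√f'c + ρt·fy). The sketch below evaluates that form under assumed inputs; whether this is the exact expression adopted in the paper is an assumption.

```python
import math

def aci_wall_shear(Acv, fc, rho_t, fy, hw_lw, lam=1.0):
    """Nominal in-plane shear strength Vn (lb), psi units:
    Vn = Acv * (alpha_c * lam * sqrt(fc) + rho_t * fy)."""
    if hw_lw <= 1.5:
        alpha_c = 3.0
    elif hw_lw >= 2.0:
        alpha_c = 2.0
    else:                      # linear interpolation between the two limits
        alpha_c = 3.0 - 2.0 * (hw_lw - 1.5)
    return Acv * (alpha_c * lam * math.sqrt(fc) + rho_t * fy)

# assumed panel data (illustrative, not the tested PFCSP specimens)
Vn = aci_wall_shear(Acv=4.0 * 40.0,   # shear area: 4 in thick x 40 in long
                    fc=2000.0,        # foamed concrete strength (psi, assumed)
                    rho_t=0.0025, fy=60000.0, hw_lw=2.4)
print(f"Vn = {Vn / 1000:.1f} kips")
```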

Keywords: deflection curves, foamed concrete (FC), load-strain relationships, precast foamed concrete sandwich panel (PFCSP), slenderness ratio, vertical in-plane shear strength capacity

Procedia PDF Downloads 187
325 Application of Particle Image Velocimetry in the Analysis of Scale Effects in Granular Soil

Authors: Zuhair Kadhim Jahanger, S. Joseph Antony

Abstract:

Studies in the literature that systematically deal with the scale effects of strip footings on different sand packings remain scarce. In this research, the variation of the ultimate bearing capacity and the deformation pattern of soil beneath strip footings of different widths, under plane-strain conditions on the surface of loose, medium-dense and dense sand, have been systematically studied using experimental and noninvasive methods for measuring microscopic deformations. The presented analyses are based on model-scale compression tests analysed using the Particle Image Velocimetry (PIV) technique. Upper bound analysis of the current study shows that the maximum vertical displacement of the sand under the ultimate load increases with footing width, but at a rate that decreases with the relative density of the sand, whereas the relative vertical displacement in the sand decreases as footing width increases. Good agreement is observed between the experimental results for the different footing widths and relative densities. The experimental analyses show that a pronounced scale effect exists for strip surface footings: the bearing capacity factors rapidly decrease up to footing widths B = 0.25 m, 0.35 m and 0.65 m for loose, medium-dense and dense sand, respectively, after which there is no significant further decrease. The deformation modes of the soil as well as the ultimate bearing capacity values are affected by the footing width. The obtained results could be used to improve settlement calculations for foundations interacting with granular soil.
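
For context, the classical bearing capacity of a surface strip footing on sand (c = 0, no surcharge) is qu = 0.5·γ·B·Nγ; the sketch below evaluates it with Vesic's Nγ under assumed soil parameters. Note that in this classical expression Nγ is independent of B, whereas the scale effect reported above amounts to Nγ decreasing with footing width, which a constant-Nγ calculation like this one misses.

```python
import math

def n_gamma(phi_deg):
    # Vesic's expression: N_gamma = 2*(Nq + 1)*tan(phi),
    # with Nq = e^(pi tan phi) * tan^2(45 + phi/2)
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    return 2.0 * (Nq + 1.0) * math.tan(phi)

gamma = 16.5  # unit weight of sand (kN/m3, assumed)
phi = 35.0    # friction angle (deg, assumed for medium-dense sand)

for B in (0.25, 0.35, 0.65, 1.0):         # footing widths (m)
    qu = 0.5 * gamma * B * n_gamma(phi)   # ultimate bearing capacity (kPa)
    print(f"B = {B:4.2f} m -> qu = {qu:7.1f} kPa")
```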

Keywords: DPIV, granular mechanics, scale effect, upper bound analysis

Procedia PDF Downloads 127
324 An Active Subsurface Geological Structure Pattern of Mud Volcano Phenomenon as an Environmental Impact of Petroleum Withdrawal in Sidoarjo, East Java, Indonesia

Authors: M. M. S. Prahastomi, M. Muhajir Saputra, Axel Derian

Abstract:

The Lapindo mud (LUSI) phenomenon, which occurred in Sidoarjo in 2006, is a geological phenomenon of national scale. The mudflow forms a mud volcano that spreads over time and is in need of serious treatment. Further research has been conducted by applying methods of geodesy, geophysics and subsurface geology, but the phenomenon still remains a mystery. Physiographically, the Sidoarjo region is included in the Kendeng zone, flanked by the Rembang zone in the north and the Solo zone in the south. The Kabuh formation, the Jombang formation and alluvium are revealed in this region. In general, the northern part of the Sidoarjo area is composed of clastic, epiclastic and pyroclastic sedimentary rocks and older alluvium of Early Pleistocene to Recent age. The study was conducted through a literature review of the stratigraphy and regional geology, together with secondary data from coupled gravity observations (Bouguer anomaly). The aim of the study is to reveal the subsurface geological structure pattern and the changes in mass flow. Gravity anomaly data were obtained from the calculation of gravity values and altitude, then processed into gravity anomaly contours which reflect the density changes of each observed gravity group. The gravity data indicate a subsurface in which deformation becomes stronger, or more intense, toward the south. Deformation in the form of decreasing gravity values was associated with a decrease in density, indicated by the presence of gas and water bursts. Sectional analysis of the changes in measured gravity values at different times indicates changes in gravity caused by subsurface subsidence, while the gravity anomaly section delineates a fault zone that renders the zone unstable.
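
A minimal sketch of the simple Bouguer anomaly calculation underlying such contours, using the standard free-air and Bouguer-slab gradients and the 1980 international gravity formula; the station values are assumed, not survey data from Sidoarjo.

```python
import math

def simple_bouguer_anomaly(g_obs, lat_deg, h, rho=2.67):
    """Simple Bouguer anomaly (mGal); g_obs in mGal, h in metres,
    rho in g/cm3 (2.67 is the conventional crustal density)."""
    lat = math.radians(lat_deg)
    # normal gravity on the ellipsoid (1980 international gravity formula)
    g_norm = 978032.7 * (1 + 0.0053024 * math.sin(lat) ** 2
                         - 0.0000058 * math.sin(2 * lat) ** 2)
    free_air = 0.3086 * h          # free-air correction (mGal)
    bouguer = 0.04193 * rho * h    # Bouguer slab correction (mGal)
    return g_obs - g_norm + free_air - bouguer

# illustrative station: observed gravity, latitude, elevation (assumed)
print(f"{simple_bouguer_anomaly(978045.2, lat_deg=-7.5, h=25.0):.2f} mGal")
```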

Keywords: mud volcano, Lumpur Sidoarjo, Bouguer anomaly, Indonesia

Procedia PDF Downloads 441
323 Fenton Sludge's Catalytic Ability with Synergistic Effects During Reuse for Landfill Leachate Treatment

Authors: Mohd Salim Mahtab, Izharul Haq Farooqi, Anwar Khursheed

Abstract:

Advanced oxidation processes (AOPs) based on the Fenton reaction are versatile options for treating complex wastewaters containing refractory compounds. However, the classical Fenton process (CFP) has limitations, such as high sludge production and reagent dosage, which limit its broad use and result in secondary contamination. Long-term solutions are therefore required for process intensification and the removal of these impediments. This study shows that Fenton sludge can serve as a catalyst in the Fe³⁺/Fe²⁺ reductive pathway, allowing non-regenerated sludge to be reused for complex wastewater treatment, such as landfill leachate treatment, even in the absence of Fenton's reagents. Experiments with and without pH adjustment in stages I and II demonstrated that an acidic pH is desirable. Humic compounds in the leachate can improve the Fe³⁺/Fe²⁺ cycle under optimal conditions, and the chemical oxygen demand (COD) removal efficiency was 22±2% and 62±2% in stages I and II, respectively. Furthermore, excellent total suspended solids (TSS) removal (> 95%) and color removal (> 80%) were obtained in stage II. The processes underlying the synergistic (oxidation/coagulation/adsorption) effects are addressed. Design of experiments (DOE) is growing increasingly popular and has thus been implemented in the chemical, water and environmental domains; the relevance of the statistical model for the desired response was validated at the explicitly stated optimal conditions. The operational factors, the characteristics of the reused sludge, toxicity analysis, cost calculation and future research objectives are also discussed. According to the study's findings, reusing non-regenerated Fenton sludge can minimize hazardous solid waste emissions and total treatment costs.

Keywords: advanced oxidation processes, catalysis, Fe³⁺/Fe²⁺ cycle, Fenton sludge

Procedia PDF Downloads 66
322 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error

Authors: Seyedamir Makinejadsanij

Abstract:

One of the most important factors in the production of quality steel is knowing the exact weight of steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead cranes. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays real-time weight. A moving object has a variable apparent weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here, and the best result was obtained with 3 centers: compared with the plain average (one center), or with two, four, five or six centers, three centers give the best answer, which is logical because the clusters above and below the real weight eliminate the noise. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show that the accuracy is about 40 kilos per 60 tons (the standard weight); as a result, with this method, the accuracy of the moving weight is calculated as 99.95%. K-means is used to calculate the exact mean of the readings; the stopping criterion of the algorithm is 1000 iterations or no points moving between the clusters. As a result of the implementation of this system, the crane operator does not stop while moving objects and continues the activity regardless of weight calculations. Production speed also increased, and human error decreased.
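
A minimal sketch of the three-center idea on simulated swinging load-cell readings; the true weight, swing amplitude and noise level are assumptions, not plant data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
true_weight = 80.0  # tons (illustrative)
# simulated readings while the load swings: true weight plus an oscillation
# of up to about +-5% and small sensor noise
readings = (true_weight
            + 4.0 * np.sin(np.linspace(0, 20 * np.pi, 600))
            + rng.normal(0, 0.3, 600))

km = KMeans(n_clusters=3, n_init=10, max_iter=1000,
            random_state=0).fit(readings.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())
# the low and high clusters absorb the swing extremes; the middle center
# approximates the true weight
print(f"estimated weight: {centers[1]:.2f} t")
```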

Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem

Procedia PDF Downloads 66
321 Outcome of Using Penpat Pinyowattanasilp Equation for Prediction of 24-Hour Uptake, First and Second Therapeutic Doses Calculation in Graves’ Disease Patient

Authors: Piyarat Parklug, Busaba Supawattanaobodee, Penpat Pinyowattanasilp

Abstract:

Radioactive iodine thyroid uptake (RAIU) has been widely used to differentiate the causes of thyrotoxicosis and to plan treatment. The 24-hour RAIU is routinely used to calculate the dose of radioactive iodine (RAI) therapy; however, a 2-day protocol is required. This study aims to evaluate a modified application of the Penpat Pinyowattanasilp equation, with outlier data (3-hour RAIU below 20% or above 80%) excluded to improve the prediction of 24-hour uptake, and then to calculate separate first and second therapeutic doses in Graves' disease patients. The equation predicts the 24-hour RAIU as P24RAIU = 32.5 + 0.702 × (3-hour RAIU). Methods: this was a retrospective study at the Faculty of Medicine Vajira Hospital in Bangkok, Thailand. Included were Graves' disease patients who visited the RAI clinic between January 2014 and March 2019. We divided subjects into 2 groups according to first and second therapeutic doses. Results: our study included a total of 151 patients; 115 patients received a first RAI dose and 36 patients a second RAI dose. P24RAIU is highly correlated with the actual 24-hour RAIU for both first and second therapeutic doses (r = 0.913, 95% CI = 0.876 to 0.939 and r = 0.806, 95% CI = 0.649 to 0.897). Bland-Altman plots show that the mean differences between predicted and actual 24-hour RAIU for the first and second doses were 2.14% (95% CI 0.83-3.46) and 1.37% (95% CI -1.41-4.14). The mean first actual and predicted therapeutic doses were 8.33 ± 4.93 and 7.38 ± 3.43 millicuries (mCi), respectively. The mean second actual and predicted therapeutic doses were 6.51 ± 3.96 and 6.01 ± 3.11 mCi, respectively. The predicted therapeutic doses are highly correlated with the actual doses for both first and second doses (r = 0.907, 95% CI = 0.868 to 0.935 and r = 0.953, 95% CI = 0.909 to 0.976), and Bland-Altman plots show mean differences between predicted and actual doses of less than 1 mCi (-0.94 and -0.5 mCi). This modified equation is simple to use in clinical practice, especially for patients with a 3-hour RAIU in the 20-80% range, in a Thai population. Before use in other populations, the correlation should be tested.
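
A small sketch of the prediction step; the equation and its 20-80% applicability range come from the abstract, while the fixed dose-per-gram dosing formula and all numeric inputs are assumptions, not the dosing rule stated in the paper.

```python
def predicted_24h_raiu(raiu_3h):
    """Penpat Pinyowattanasilp equation (from the abstract); the authors
    apply it for a 3-hour RAIU in the 20-80% range."""
    if not 20.0 <= raiu_3h <= 80.0:
        raise ValueError("3-hour RAIU outside the 20-80% applicability range")
    return 32.5 + 0.702 * raiu_3h

def rai_dose_mci(gland_mass_g, dose_uci_per_g, uptake_percent):
    # a commonly used fixed-activity formula (assumption):
    # administered activity = gland mass * planned uCi/g / uptake fraction
    return gland_mass_g * dose_uci_per_g / (uptake_percent / 100.0) / 1000.0

p24 = predicted_24h_raiu(55.0)                  # assumed 3-h uptake of 55%
print(f"P24RAIU = {p24:.1f}%")
print(f"dose = {rai_dose_mci(40.0, 150.0, p24):.1f} mCi")  # 40 g gland, 150 uCi/g
```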

Keywords: equation, Graves' disease, prediction, 24-hour uptake

Procedia PDF Downloads 119
320 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for the structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantified reliability; 6) calculation of the "weights" of the elements of the system. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, the structural analysis of systems, research, and the design of systems with an optimal structure are carried out.
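
As an illustration of stages 2)-5), the sketch below computes exact system reliability from the shortest paths (minimal path sets) of a small bridge network by full state enumeration, which is equivalent to evaluating the reliability polynomial; the network and component reliabilities are assumed, not taken from the paper. The element "weights" of stage 6 could then be obtained, for example, as Birnbaum importance by re-evaluating the reliability with each component probability fixed to 1 and to 0.

```python
from itertools import product

# minimal path sets of a 5-component bridge network (illustrative)
path_sets = [{1, 3}, {2, 4}, {1, 5, 4}, {2, 5, 3}]
p = {1: 0.9, 2: 0.9, 3: 0.85, 4: 0.85, 5: 0.95}  # component reliabilities (assumed)

def system_works(up):
    # the system functions if at least one path set is fully operational
    return any(ps <= up for ps in path_sets)

reliability = 0.0
for states in product([0, 1], repeat=len(p)):     # enumerate all 2^5 states
    up = {c for c, s in zip(p, states) if s}
    prob = 1.0
    for c, s in zip(p, states):
        prob *= p[c] if s else 1 - p[c]
    if system_works(up):
        reliability += prob
print(f"system reliability: {reliability:.4f}")
```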

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 41
319 Rapid Assessment of the Ability of Forest Vegetation in Kulonprogo to Store Carbon Using Multispectral Satellite Imagery and Vegetation Index

Authors: Ima Rahmawati, Nur Hafizul Kalam

Abstract:

The rapid development of the industrial and economic sectors in various countries has raised greenhouse gas (GHG) emissions. Greenhouse gases, dominated by carbon dioxide (CO2) and methane (CH4) in the atmosphere, make the surface temperature of the earth increase steadily. The increase in these gases is caused by the incomplete combustion of fossil fuels such as petroleum and coal, and also by a high rate of deforestation. Yogyakarta Special Province, a year-round tourist destination, has great potential for increasing greenhouse gas emissions, mainly from incomplete combustion. One effort to reduce the concentration of these gases in the atmosphere is to keep and empower the existing forests in the Province of Yogyakarta, especially the forest in Kulonprogo, so that its greenness is maintained and it can absorb and store carbon maximally. Remote sensing technology can be used to determine the ability of forests to absorb carbon, which is connected to the density of vegetation. The purpose of this study is to determine the biomass density of forest vegetation and the ability of forests to store carbon through a photo-interpretation and Geographic Information System approach. The remote sensing imagery used in this study is a LANDSAT 8 OLI recording from 2015. LANDSAT 8 OLI imagery has 30-meter spatial resolution in the multispectral bands, which gives a general overview of the carbon stored for each density of existing vegetation. The method is a vegetation index transformation combined with allometric calculations on field data, followed by regression analysis. The results are model maps of the density of forest vegetation in Kulonprogo, Yogyakarta, and of its capability to store carbon.
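
A minimal sketch of the vegetation-index-plus-regression workflow: NDVI from Landsat 8 OLI bands (band 5 = NIR, band 4 = red), a least-squares fit of allometric field biomass against NDVI, and conversion to carbon. The plot data, toy raster and the ~0.47 t C per t dry biomass factor are assumptions, not the study's values.

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + 1e-9)

# assumed field plots: NDVI at plot locations and allometric biomass (t/ha)
plot_ndvi = np.array([0.35, 0.48, 0.55, 0.62, 0.71, 0.78])
plot_biomass = np.array([40.0, 75.0, 95.0, 120.0, 160.0, 190.0])

# regression: biomass = a * NDVI + b by least squares
a, b = np.polyfit(plot_ndvi, plot_biomass, 1)

# apply to a (toy) NDVI raster and convert biomass to stored carbon using a
# commonly used carbon fraction of ~0.47 (assumption)
ndvi_raster = np.array([[0.4, 0.6], [0.7, 0.8]])
carbon_map = 0.47 * (a * ndvi_raster + b)   # t C / ha
print(carbon_map.round(1))
```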

Keywords: remote sensing, carbon, Kulonprogo, forest vegetation, vegetation index

Procedia PDF Downloads 369
318 The Effect of Foundation on the Earth Fill Dam Settlement

Authors: Masoud Ghaemi, Mohammadjafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh

Abstract:

Careful monitoring of earth dams, to measure deformation caused by settlement and movement, has always been a concern for engineers in the field. To measure the settlement and deformation of earth dams, precision instruments combining a settlement set and an inclinometer, commonly referred to as IS instruments, are usually used. In some dams, because the alluvium is thick and cannot be removed (technically, economically and in terms of performance), it is impossible to seat the end of the IS instrument (the combined inclinometer-settlement instrument) in the rock foundation. Engineers inevitably have to accept installing the pipes in the weak, deformable alluvial foundation, which leads to errors in the calculation of the actual (absolute) settlement in different parts of the dam body. The purpose of this paper is to present new, refined criteria for predicting settlement and deformation in earth dams. The study is based on the conditions at three dams with quite deformable alluvial foundations (Agh Chai, Narmashir and Gilan-e Gharb), in order to provide settlement criteria that account for the alluvial foundation. To achieve this goal, the settlement of the dams was simulated using the finite difference method in FLAC3D software, and the modeling results were compared with the IS instrument readings. Finally, to calibrate the model and validate the results, regression analysis techniques were used to scrutinize the modeling parameters against the real situations, and then, using MATLAB and its CURVE FITTING toolbox, new settlement criteria based on the elasticity modulus, cohesion, friction angle and density of the earth dam and the alluvial foundation were obtained. These studies show that, by using the new criteria, the amount of settlement and deformation of dams with alluvial foundations can be corrected after the instrument readings, and the error in reading the IS instrument can be greatly reduced.
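
A minimal sketch of the curve-fitting step in Python rather than MATLAB: the functional form of the correction law, the parameter names and all calibration numbers below are illustrative assumptions, not the criteria derived in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def settlement_correction(X, a, b, c, d):
    # hypothetical correction law combining the four soil parameters
    E, coh, phi, gamma = X  # modulus (MPa), cohesion (kPa), friction (deg), unit weight (kN/m3)
    return a * gamma / E + b * coh + c * np.tan(np.radians(phi)) + d

# assumed calibration data: FLAC3D-minus-instrument settlement differences (cm)
E     = np.array([30., 50., 80., 120., 150.])
coh   = np.array([20., 25., 30., 35., 40.])
phi   = np.array([28., 30., 32., 34., 36.])
gamma = np.array([18., 19., 19.5, 20., 20.5])
diff  = np.array([5.1, 3.4, 2.2, 1.5, 1.2])

params, _ = curve_fit(settlement_correction, (E, coh, phi, gamma), diff)
print(params.round(4))
```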

Keywords: earth-fill dam, foundation, settlement, finite difference, MATLAB, curve fitting

Procedia PDF Downloads 167
317 Understanding Inhibitory Mechanism of the Selective Inhibitors of Cdk5/p25 Complex by Molecular Modeling Studies

Authors: Amir Zeb, Shailima Rampogu, Minky Son, Ayoung Baek, Sang H. Yoon, Keun W. Lee

Abstract:

Neurotoxic insults activate calpain, which in turn produces truncated p25 from p35. p25 forms a hyperactivated Cdk5/p25 complex and thereby induces severe neuropathological aberrations, including hyperphosphorylated tau, neuroinflammation, apoptosis, and neuronal death. Inhibition of the Cdk5/p25 complex alleviates the aberrant phosphorylation of tau and mitigates AD pathology. PHA-793887 and Roscovitine have been investigated as selective inhibitors of Cdk5/p25, with IC50 values of 5 nM and 160 nM, respectively, but their inhibition mechanisms remain unexplored. Herein, computational simulations have explored the binding mode and interaction mechanism of PHA-793887 and Roscovitine with Cdk5/p25. Docking results suggested that PHA-793887 and Roscovitine occupy the ATP-binding site of Cdk5, obtaining the highest docking (GOLD) scores of 66.54 and 84.03, respectively. Furthermore, molecular dynamics (MD) simulation demonstrated that PHA-793887 and Roscovitine established stable RMSDs of 1.09 Å and 1.48 Å with Cdk5/p25, respectively. Profiling of the polar interactions suggested that each inhibitor formed hydrogen bonds (H-bonds) with the catalytic residues of Cdk5 and remained stable throughout the molecular dynamics simulation. Additionally, binding free energy calculations by the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) method gave PHA-793887 and Roscovitine the lowest binding free energies with Cdk5/p25, of -150.05 kJ/mol and -113.14 kJ/mol, respectively. Free energy decomposition demonstrated that the polar energy of the H-bond between Glu81 of Cdk5 and PHA-793887 is the essential factor making PHA-793887 highly selective toward Cdk5/p25. Overall, this study provides substantial evidence on the mechanistic interactions of the selective inhibitors of Cdk5/p25 and could serve as a fundamental consideration in the development of structure-based selective inhibitors of Cdk5/p25.

Keywords: Cdk5/p25 inhibition, molecular modeling of Cdk5/p25, PHA-793887 and roscovitine, selective inhibition of Cdk5/p25

Procedia PDF Downloads 118
316 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles, supporting the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk-factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector; the goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
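
A minimal sketch of the naive Bayes recommendation step: each institutional profile is a multidimensional risk-factor vector and the classifier suggests an endangerment group. The features, labels and training values are illustrative assumptions, not the survey data used in the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# assumed training data: rows are institutional risk profiles expressed as
# risk-factor vectors; labels are endangerment groups
X_train = np.array([
    [0.9, 0.8, 0.7],   # e.g. high obsolescence, low tool support, poor docs
    [0.8, 0.9, 0.6],
    [0.2, 0.1, 0.3],
    [0.3, 0.2, 0.1],
    [0.5, 0.5, 0.4],
    [0.6, 0.4, 0.5],
])
y_train = np.array(["high", "high", "low", "low", "medium", "medium"])

clf = GaussianNB().fit(X_train, y_train)

# recommend the matching endangerment group for a new institutional profile
new_profile = np.array([[0.7, 0.6, 0.8]])
print(clf.predict(new_profile)[0], clf.predict_proba(new_profile).round(3))
```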

Keywords: linked open data, information integration, digital libraries, data mining

Procedia PDF Downloads 400
315 Modelling Patient Condition-Based Demand for Managing Hospital Inventory

Authors: Esha Saha, Pradip Kumar Ray

Abstract:

A hospital inventory comprises a large number and great variety of items for the proper treatment and care of patients, such as pharmaceuticals, medical equipment, surgical items, etc. Improper management of these items, i.e. stockouts, may lead to delays in treatment or other fatal consequences, even the death of a patient. Hospitals therefore generally tend to overstock items to avoid the risk of a stockout, which leads to unnecessary investment of money, difficulty in storage, more expiration and wastage, etc. Thus, in such a challenging environment, it is necessary for hospitals to follow an inventory policy that considers the stochasticity of demand in a hospital. Statistical analysis captures the correlation between patient condition, represented by bed occupancy, and the stochastically changing patient demand. Because of this dependency on bed occupancy, a Markov model is developed that maps the changes in hospital inventory demand onto the changes in patient condition, represented by movements between bed occupancy states (acute care, rehabilitative and long-care states) during the patient's length of stay in the hospital. An inventory policy is developed for a hospital based on the fulfillment of patient demand, with the objective of minimizing the frequency and quantity of orders of inventoried items. The analytical structure of the model, based on probability calculations, is provided to show the optimal inventory-related decisions. A case study for the development of the hospital inventory model based on patient demand is illustrated for multiple inpatient pharmaceutical items. A sensitivity analysis is conducted to investigate the impact of inventory-related parameters on the developed optimal inventory policy. The developed model and solution approach may therefore help hospital managers and pharmacists in managing hospital inventory under the stochastic demand of inpatient pharmaceutical items.
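
A minimal sketch of a condition-based demand estimate with a three-state Markov chain over the bed occupancy states named above; the transition probabilities and per-state demand rates are illustrative assumptions, not the paper's case-study data.

```python
import numpy as np

# states: acute care, rehabilitative, long-care; daily transition matrix (assumed)
P = np.array([
    [0.60, 0.30, 0.10],
    [0.10, 0.70, 0.20],
    [0.05, 0.15, 0.80],
])
demand_per_state = np.array([5.0, 2.0, 1.0])  # drug units per patient-day (assumed)

# stationary distribution pi solves pi P = pi: left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# long-run expected demand per patient-day, weighting each state's rate
expected_daily_demand = pi @ demand_per_state
print(f"stationary occupancy: {pi.round(3)}, "
      f"expected demand/patient-day: {expected_daily_demand:.2f}")
```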

Keywords: bed occupancy, hospital inventory, markov model, patient condition, pharmaceutical items

Procedia PDF Downloads 299
314 Direct Approach in Modeling Particle Breakage Using Discrete Element Method

Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang

Abstract:

The current study aims to extend an available in-house discrete element method (DEM) code and link it to a direct breakage event, making it possible to determine particle breakage, and then the fragment size distribution, simultaneously with the DEM simulation. The approach applies particle breakage directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughter fragments. In this way, the calculation proceeds on a continuously updated particle list, which is very similar to the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)-powder bed impacts in a ball mill and in particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate of coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations: fine particles in a particle bed increase mechanical energy loss, and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.

Keywords: particle bed, breakage models, breakage kinetics, discrete element method

Procedia PDF Downloads 172
313 Larger Diameter 22 MM-PDC Cutter Greatly Improves Drilling Efficiency of PDC Bit

Authors: Fangyuan Shao, Wei Liu, Deli Gao

Abstract:

With the increasing pace of oil and gas exploration, development and production worldwide, drilling speed-up technology is becoming more and more critical to reducing development costs. A highly efficient, customized PDC bit is an important piece of equipment in the bottom hole assembly (BHA); improving the rock-breaking efficiency of PDC bits therefore helps reduce drilling time and drilling cost. Advances in PDC bit technology have resulted in a leapfrogging improvement in the rate of penetration (ROP) of PDC bits over roller cone bits in soft to medium-hard formations. Recently, developments in PDC technology have allowed the diameter of the PDC cutter to be further expanded; the maximum cutter diameter used in this paper is 22 mm. According to theoretical calculation, at the same depth of cut (DOC) the 22 mm PDC cutter increases the cutter exposure, and the increase in cutter diameter helps to increase the cutting area of the cutter. To compare the cutting performance of the 22 mm PDC cutter with the existing commonly used cutters, 16 mm, 19 mm and 22 mm PDC cutters were mounted on a vertical turret lathe (VTL) in the laboratory for cutting tests at different DOCs: 0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm, 2.5 mm and 3.0 mm. The rock sample used in the experiments was limestone. The laboratory results show that the new 22 mm cutter technology greatly improves cutting efficiency. On the one hand, as the DOC increases, the mechanical specific energy (MSE) of all cutters decreases, which means that cutting efficiency increases; on the other hand, at the same DOC, the larger the cutter diameter, the larger the working area of the cutter and hence the higher the cutting efficiency. In view of their high performance, the 22 mm PDC cutters were applied in full-scale bit field experiments. The results show that a bit with 22 mm cutters achieves a breakthrough improvement in ROP over bits with conventional 16 mm and 19 mm cutters in offset well drilling.
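
In single-cutter tests, MSE is commonly taken as the cutting force divided by the cross-sectional area of the cut; the sketch below compares the three cutter sizes under that assumed definition, with the groove approximated as a circular segment and all force values illustrative rather than measured.

```python
import math

def mse_mpa(cut_force_n, cutter_dia_mm, doc_mm):
    # groove cross-section approximated as a circular segment of the cutter
    r = cutter_dia_mm / 2.0
    theta = 2.0 * math.acos((r - doc_mm) / r)            # segment angle (rad)
    area_mm2 = 0.5 * r * r * (theta - math.sin(theta))   # segment area (mm^2)
    return cut_force_n / area_mm2                        # N/mm^2 = MPa

# assumed cutting forces at 2.0 mm DOC for the three cutter diameters
for dia, force in ((16.0, 950.0), (19.0, 1050.0), (22.0, 1120.0)):
    print(f"{dia:.0f} mm cutter: MSE = {mse_mpa(force, dia, doc_mm=2.0):.1f} MPa")
```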

Keywords: polycrystalline diamond compact, 22 mm-PDC cutters, cutting efficiency, mechanical specific energy

Procedia PDF Downloads 176
312 Design & Development of a Static-Thrust Test-Bench for Aviation/UAV Based Piston Engines

Authors: Syed Muhammad Basit Ali, Usama Saleem, Irtiza Ali

Abstract:

Internal combustion engines have been pioneers in the aviation industry: piston engines have powered aircraft propulsion from propeller-driven biplanes to turboprop, commercial, and cargo airliners. To provide an adequate amount of thrust, a piston engine rotates the propeller at a specific rpm, allowing enough mass airflow. Thrust is the only forward-acting force of an aircraft and is what allows heavier-than-air bodies to fly; its calculation depends on the mathematical model, the variables included in it, and correct measurement. Test benches have been a benchmark in the aerospace industry for analysing results before a flight, and they hold paramount significance in reliability and safety engineering. The calculation of thrust from a piston engine also depends on environmental changes, the diameter of the propeller, and the density of air. The project is centered on piston engines used in the aviation industry for light aircraft and UAVs. A static thrust test bench involves various units, each performing a designed purpose, to monitor and display. Static thrust tests are performed on the ground, and safety concerns hold paramount importance. The execution of this study involved research, design, manufacturing, and evaluation of results, based on reverse engineering and proceeding from virtual design through analytical analysis and simulations. The final results were gathered from various methods, such as correlation between a conventional mass-spring system and a digital load cell. On average, we measured 17.5 kg of thrust (25+ engine run-ups, around 40 hours of engine running), with only a 10% deviation from the analytically calculated thrust, providing 90% accuracy.
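
A minimal static-thrust sketch using the standard propeller relation T = Ct·ρ·n²·D⁴; the thrust coefficient, propeller size and rpm are assumed values, not the test-bench configuration described in the paper.

```python
rho = 1.225      # air density at sea level (kg/m^3)
Ct = 0.10        # static thrust coefficient (assumed for a fixed-pitch prop)
D = 0.60         # propeller diameter (m, assumed)
n = 6000 / 60.0  # rotational speed: 6000 rpm -> rev/s

T = Ct * rho * n**2 * D**4        # static thrust (N)
print(f"T = {T:.1f} N  ({T / 9.81:.1f} kgf)")
```

With these assumed numbers the relation gives roughly 16 kgf, the same order as the bench's 17.5 kg average, which illustrates how the analytical estimate and load-cell reading can be correlated.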

Keywords: aviation, aeronautics, static thrust, test bench, aircraft maintenance

Procedia PDF Downloads 354
311 Software Development for AASHTO and Ethiopian Roads Authority Flexible Pavement Design Methods

Authors: Amare Setegn Enyew, Bikila Teklu Wodajo

Abstract:

The primary aim of flexible pavement design is to ensure the development of economical and safe road infrastructure. However, failures can still occur due to improper or erroneous structural design. In Ethiopia, the design of flexible pavements relies on manual calculations and on selecting a pavement structure from a catalogue. The catalogue offers, in eight different charts, alternative structures for combinations of traffic and subgrade classes, as outlined in the Ethiopian Roads Authority (ERA) Pavement Design Manual 2001. Furthermore, design modification is allowed in accordance with the structural number principles outlined in the AASHTO 1993 Guide for Design of Pavement Structures. Nevertheless, the manual calculation and design process involves the use of nomographs, charts, tables, and formulas, which increases the likelihood of human error and inaccuracy, and this may lead to unsafe or uneconomical road construction. To address this challenge, a software package called AASHERA has been developed for the AASHTO 1993 and ERA design methods, using the MATLAB language. The software accurately determines the required thicknesses of the flexible pavement surface, base, and subbase layers for the two methods. It also digitizes design inputs and references such as nomographs, charts, default values, and tables. Moreover, the software allows easier comparison of the two design methods in terms of results and construction cost. AASHERA's accuracy has been confirmed through comparisons with designs from handbooks and manuals. The software can help reduce the human error, inaccuracy, and time consumption associated with the conventional manual design methods employed in Ethiopia. AASHERA, with its validated accuracy, proves to be an indispensable tool for flexible pavement structure designers.
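
The structural number principle referenced above checks a trial section against a required SN via SN = a1·D1 + a2·D2·m2 + a3·D3·m3; the sketch below shows this layered check with typical (assumed) layer and drainage coefficients and an assumed required SN.

```python
def provided_sn(D1, D2, D3, a1=0.44, a2=0.14, a3=0.11, m2=1.0, m3=1.0):
    """AASHTO 1993 structural number of a trial section (D in inches).
    a_i are typical layer coefficients for asphalt surface, granular base
    and subbase; m_i are drainage coefficients (all assumed values)."""
    return a1 * D1 + a2 * D2 * m2 + a3 * D3 * m3

required_sn = 4.2  # assumed output of the AASHTO design equation/nomograph
trial = provided_sn(D1=4.0, D2=8.0, D3=12.0)
print(f"provided SN = {trial:.2f} -> "
      f"{'OK' if trial >= required_sn else 'increase layer thicknesses'}")
```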

Keywords: flexible pavement design, AASHTO 1993, ERA, MATLAB, AASHERA

Procedia PDF Downloads 38
310 The Effects of Impact Forces and Kinematics of Two Different Stance Positions in Straight Punch Techniques in Boxing

Authors: Bergun Meric Bingul, Cigdem Bulgan, Ozlem Tore, Mensure Aydin, Erdal Bal

Abstract:

The aim of the study was to compare the impact forces and some kinematic parameters of the straight punch performed from two different stance positions in boxing. Nine elite boxers from the Turkish National Team (mean age ± SD 19.33 ± 2.11 years, mean height 174.22 ± 3.79 cm, mean weight 66.0 ± 6.62 kg) participated in this study voluntarily. The boxers performed one trial of the straight punch technique on a sandbag for each of the two stance positions (orthodox and southpaw). The trials were recorded at a frequency of 120 Hz using eight synchronized high-speed cameras (Oqus 7+), placed approximately at right angles to one another. Three-dimensional motion analysis was performed with a motion capture system (Qualisys, Sweden), and data were transferred to Windows-based data acquisition software, QTM (Qualisys Track Manager). An 11-segment model was used for the determination of the kinematic variables (calf, leg, punch, upper arm, lower arm, trunk), and the sandbag was also markered for the calculation of the impact forces. The wand calibration method (with a T-stick) was used for field calibration. The mean velocity and acceleration of the punch, the mean acceleration of the sandbag, and the angles of the trunk, shoulder, hip and knee were calculated. Differences between the stances were compared with the Wilcoxon test using SPSS 20.0. According to the results, a statistically significant difference was found in the trunk angle on the sagittal plane (yz) (p < 0.05). Significant differences were also found in sandbag acceleration and impact forces between the stance positions (p < 0.05). The boxers achieved greater impact forces and accelerations in the orthodox stance position. It is therefore recommended to use an orthodox stance instead of a southpaw stance in the straight punch technique, especially for creating greater impact forces.

Keywords: boxing, impact force, kinematics, straight punch, orthodox, southpaw

Procedia PDF Downloads 294
309 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles

Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka

Abstract:

The nozzle is the element of a rocket engine in which the potential energy of the gases generated during combustion is converted into the kinetic energy of the gas stream. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine, and designing the nozzle assembly is therefore one of the most responsible stages in developing a rocket engine design. The paper presents the results of simulations of three types of rocket propulsion nozzles, differing in shape: a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle. The calculations were made using CFD (Computational Fluid Dynamics) in ANSYS Fluent software. The calculation results are presented in the form of distributions of pressure, velocity and turbulence kinetic energy in the longitudinal section, and the courses of these values along the nozzles are also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of the bell nozzle, the reshaped wall eliminates the flow disturbances in the critical section, which reduces the probability of waves forming before or after the trailing edge. The most sophisticated design is the bell-type nozzle, which allows performance to be maximized without adding extra weight; thanks to these advantages, the bell-type nozzle can be used as a starter and auxiliary engine nozzle. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
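
For a feel of the energy conversion the nozzle performs, the ideal (isentropic) exit velocity can be computed from chamber conditions; the gas properties and pressures below are illustrative assumptions, not the CFD boundary conditions of the paper.

```python
import math

gamma = 1.2          # ratio of specific heats of the combustion gases (assumed)
R = 8.314 / 0.022    # specific gas constant for ~22 g/mol exhaust (J/kg.K)
T0 = 3000.0          # chamber (stagnation) temperature (K, assumed)
p0 = 5.0e6           # chamber pressure (Pa, assumed)
pe = 1.0e5           # exit pressure (Pa, assumed)

# isentropic expansion: ve = sqrt( 2*gamma/(gamma-1) * R*T0 * (1 - (pe/p0)^((gamma-1)/gamma)) )
ve = math.sqrt(2 * gamma / (gamma - 1) * R * T0
               * (1 - (pe / p0) ** ((gamma - 1) / gamma)))
print(f"ideal exit velocity: {ve:.0f} m/s")
```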

Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow

Procedia PDF Downloads 136
308 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method

Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent

Abstract:

Studies concerning microstructures are becoming more important as the use of various micro-electro-mechanical systems (MEMS) increases. Thus, in recent years, the thermal buckling and vibration analysis of microstructures has been the subject of many investigations using different numerical methods. In this study, thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation, with size effects taken into consideration via modified couple stress theory. The mixed formulation is based on a functional which is derived systematically via the Gateaux differential. After the resolution of all field equations of the beam, a potential operator is carefully constructed; this operator is then used in the construction of the functional. The usual procedures of finite element approximation are applied to derive the mixed finite element equations once the potential is obtained. The resulting finite element formulation allows the use of C₀-type simple linear shape functions and avoids the shear-locking phenomenon, which is a common shortcoming of displacement-based formulations of moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios and boundary conditions. The versatility of the mixed formulation is demonstrated against other numerical methods such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects to the overall mechanical response.

Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method

Procedia PDF Downloads 107
307 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis

Authors: Sevilay Çankaya

Abstract:

The detection of deception in children's interviews is essential for establishing statement veracity. A widely used method for deception detection is building cognitive load, which is the logic of the cognitive interview (CI), and its effectiveness for adults is well established. This meta-analysis examines the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. Additionally, the effect of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The current meta-analysis includes ten effect sizes obtained from database searches. Effect sizes were calculated as Hedges' g under a random-effects model using CMA version 2. A heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participant characteristics, interview setting, and interviewer characteristics were coded as possible moderators to explain the variance. Age was a significant moderator (β = .021; p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participant characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and interviewer characteristics (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): cognitive load was effective for the total number of events recalled when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes, and inherent participant traits may not significantly alter the relationship. These findings shed light on the need for more focused, age-specific methods when using cognitive load measures. Further studies in this field may make it possible to improve the precision and dependability of deceit detection in children's interviews.
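
For reference, Hedges' g is Cohen's d multiplied by a small-sample correction factor; the sketch below computes it from group summaries, with the group means, SDs and sizes being illustrative assumptions, not the studies included in the meta-analysis.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled            # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # small-sample correction factor
    return j * d

# assumed group summaries: high-load vs low-load interview condition
print(round(hedges_g(7.2, 2.1, 30, 6.8, 2.3, 28), 3))
```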

Keywords: deceit detection, cognitive load, memory recall, children's interviews, meta-analysis

Procedia PDF Downloads 35
306 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by the Density Functional Theory (DFT) Method

Authors: Choukri Lekbir, Mira Mokhtari

Abstract:

Hydrogen-material interactions and mechanisms can be modeled at the nano scale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of the material nickel has been studied using the density functional theory (DFT) method. Two types of clusters are optimized: nickel clusters and hydrogen-nickel systems. In the case of nickel clusters (n = 1-6) without the presence of hydrogen, three types of electronic structures (neutral, cationic and anionic) were optimized with three basis set calculations (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). Comparing the binding energies and bond lengths of the three structures of the nickel clusters (neutral, cationic and anionic) obtained with these basis sets shows that the results for the neutral and anionic nickel clusters are in good agreement with the experimental results. For the neutral and anionic nickel clusters, comparing the energies and bond lengths obtained with the three bases shows that the PBE/DGDZVP2 basis set matches the experimental results best. In the case of the anionic nickel clusters (n = 1-6) with hydrogen present, optimization of the anionic hydrogen-nickel structures using the PBE/DGDZVP2 basis set shows that the binding energies and bond lengths increase compared to those obtained for the anionic nickel clusters without hydrogen. This reveals the armoring effect exerted by hydrogen on the electronic structure of nickel, which is due to the storage of hydrogen energy within the nickel cluster structures. The comparison of the bond lengths of the two cluster types shows the expansion of the cluster geometry due to the presence of hydrogen.
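
A minimal sketch of the cluster binding-energy evaluation that underlies such comparisons: Eb = (n·E(Ni) − E(Niₙ))/n. All total energies are illustrative placeholders, not the DFT values obtained in the paper.

```python
# assumed DFT total energies in hartree (illustrative placeholders)
E_atom = -169.432                                     # single Ni atom
E_cluster = {2: -338.995, 3: -508.612, 4: -678.281}   # Ni_n clusters

HARTREE_TO_EV = 27.2114

for n, E_n in sorted(E_cluster.items()):
    # binding energy per atom; positive values indicate a bound cluster
    Eb = (n * E_atom - E_n) / n
    print(f"Ni_{n}: Eb = {Eb * HARTREE_TO_EV:.3f} eV/atom")
```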

Keywords: binding energies, bond lengths, density functional theoretical, geometry optimization, hydrogen energy, nickel cluster

Procedia PDF Downloads 393
305 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of specific dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces image detail by pooling, and this operation overlooks the details that forensic experts are most concerned with. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and of the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, while the forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion is performed at the score level; at the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
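
A minimal sketch of a likelihood-ratio calculation from genuine and impostor score distributions, with a simple multiplicative score-level fusion; the scores, the Gaussian score model and the independence assumption are all illustrative simplifications, not the paper's method.

```python
import numpy as np
from scipy.stats import norm

# assumed similarity scores from a face recognition system (illustrative)
genuine = np.array([0.82, 0.88, 0.91, 0.79, 0.85, 0.90])   # same-person pairs
impostor = np.array([0.35, 0.42, 0.28, 0.51, 0.38, 0.44])  # different-person pairs

# model each score population with a Gaussian (a common simplification)
g_mu, g_sd = genuine.mean(), genuine.std(ddof=1)
i_mu, i_sd = impostor.mean(), impostor.std(ddof=1)

def likelihood_ratio(score):
    # LR = p(score | same source) / p(score | different sources)
    return norm.pdf(score, g_mu, g_sd) / norm.pdf(score, i_mu, i_sd)

# score-level fusion: combine the system's LR with an expert-assigned LR by
# multiplication (assumes the two assessments are independent)
system_lr = likelihood_ratio(0.75)
expert_lr = 4.0  # assumed expert-assigned LR
print(f"fused LR: {system_lr * expert_lr:.2f}")
```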

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 103
304 Mechanical Behavior of Laminated Glass Cylindrical Shell with Hinged Free Boundary Conditions

Authors: Ebru Dural, M. Zulfu Asık

Abstract:

Laminated glass is a kind of safety glass made by 'sandwiching' a polyvinyl butyral (PVB) interlayer between two glass sheets. When the glass is broken, the interlayer holds the glass sheets together, which reduces the hazard of sharp projectiles during natural and man-made disasters; laminated glass is therefore widely applied in the building, architecture, automotive and transport industries. A laminated glass unit can easily undergo large displacements even under its own weight, so in order to explain its true behavior it should be analyzed using large deflection theory to represent the nonlinear behavior. In this study, a nonlinear mathematical model is developed for the analysis of a laminated glass cylindrical shell which is free in the radial directions and restrained in the axial directions. The results are verified against experiments carried out on laminated glass cylindrical shells. The behavior of the laminated composite cylindrical shell is represented by five partial differential equations: four represent the axial and radial displacements, and the fifth the transverse deflection of the unit. The governing partial differential equations are derived by employing variational principles and the minimum potential energy concept. The finite difference method is employed to solve the coupled differential equations: they are first converted into a system of matrix equations, and then an iterative procedure is employed, which is necessary since the equations are coupled. Problems in obtaining a convergent sequence with the employed procedure are overcome by employing a variable underrelaxation factor. The procedure developed to solve the differential equations requires not only less storage but also less calculation time, which is a substantial advantage in computational mechanics problems.
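
A minimal sketch of an under-relaxed iterative solve of a coupled system A·x = b(x); the 2×2 system is a stand-in for illustration, not the shell equations, and a fixed relaxation factor is used where the paper varies it adaptively.

```python
import numpy as np

A = np.array([[4.0, -1.0], [-1.0, 3.0]])

def rhs(x):
    # a mildly nonlinear right-hand side coupling the unknowns (illustrative)
    return np.array([1.0 + 0.1 * x[1] ** 2, 2.0 + 0.1 * x[0] ** 2])

x = np.zeros(2)
omega = 0.5  # underrelaxation factor (fixed here; variable in the paper)
for it in range(200):
    x_new = np.linalg.solve(A, rhs(x))
    if np.linalg.norm(x_new - x) < 1e-10:   # converged
        break
    x = (1 - omega) * x + omega * x_new     # under-relaxed update
print(x, f"({it} iterations)")
```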

Keywords: laminated glass, mathematical model, nonlinear behavior, PVB

Procedia PDF Downloads 292
303 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management is only possible if the exposed risks can be calculated. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; nevertheless, flood risk calculation for the area is still very limited. This study calculates the flood risk for Jakarta using the two-dimensional hydrodynamic model ANUGA together with the one-dimensional model HEC-RAS, covering 13 major rivers in Jakarta. ANUGA simulates the physical and dynamical interaction between streamflow, river geometry, and land cover to produce a 1-meter-resolution inundation map. The streamflow input to the model was obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. Probabilistic streamflow was derived from probabilistic rainfall fitted with the Log-Pearson III, Normal, and Gumbel statistical distributions, whose goodness of fit was checked with the Chi-square and Kolmogorov-Smirnov tests. The 2007 flood event is used as a benchmark to evaluate the accuracy of the model output. Property damage was estimated from flood depth for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works, Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's estimated flood losses for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods are Rp 1.30 t, Rp 16.18 t, Rp 16.85 t, Rp 21.21 t, Rp 24.32 t, and Rp 24.67 t, respectively, out of a total building value of Rp 434.43 t.
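
The probabilistic-rainfall step can be sketched in a few lines of Python with scipy, shown here for the Gumbel distribution with a Kolmogorov-Smirnov check; the rainfall series below is hypothetical and stands in for the study's actual data.

import numpy as np
from scipy import stats

# Hypothetical annual-maximum daily rainfall series (mm);
# the study's actual rainfall data are not reproduced here.
rain = np.array([110, 152, 98, 340, 187, 129, 210, 176, 95, 260,
                 143, 118, 300, 165, 201, 132, 240, 155, 178, 122])

# Fit the Gumbel (extreme value type I) distribution
loc, scale = stats.gumbel_r.fit(rain)

# Goodness of fit: Kolmogorov-Smirnov test
ks_stat, p_value = stats.kstest(rain, 'gumbel_r', args=(loc, scale))
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.2f}")

# Design rainfall for return period T: non-exceedance prob. = 1 - 1/T
for T in (5, 10, 25, 50, 100):
    x_T = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
    print(f"T = {T:3d} yr -> design rainfall {x_T:6.1f} mm")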

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 245
302 QSAR Modeling of Germination Activity of a Series of 5-(4-Substituent-Phenoxy)-3-Methylfuran-2(5H)-One Derivatives with Potential of Strigolactone Mimics toward Striga hermonthica

Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Cristina Prandi, Piermichele Kobauri

Abstract:

The present study is based on molecular modeling of a series of twelve 5-(4-substituent-phenoxy)-3-methylfuran-2(5H)-one derivatives that are potential strigolactone mimics toward Striga hermonthica. The first step of the analysis was the calculation of molecular descriptors that numerically describe the structures of the analyzed compounds. The descriptors ALOGP (lipophilicity), AClogS (water solubility), and BBB (blood-brain barrier penetration) served as the input variables in multiple linear regression (MLR) modeling of germination activity toward S. hermonthica. Two MLR models were obtained: the first contains the ALOGP and AClogS descriptors, while the second is based on these two descriptors plus the BBB descriptor. Although the second MLR model breaks the Topliss-Costello rule, it has much better statistical and cross-validation characteristics than the first. ALOGP and AClogS are often very suitable predictors of biological activity, as they characterize the behavior and availability of a compound in a biological system (i.e., its ability to pass through cell membranes). The BBB descriptor defines the ability of a molecule to pass through the blood-brain barrier; besides lipophilicity, its value strongly depends on molecular bulkiness, so it also carries information about bulkiness. According to the results of the MLR modeling, these three descriptors are very good predictors of the germination activity of the analyzed compounds toward S. hermonthica seeds. This article is based upon work from COST Action FA1206, supported by COST (European Cooperation in Science and Technology).
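
A minimal Python sketch of the MLR-with-cross-validation workflow follows; the descriptor values and germination responses below are invented placeholders for the twelve derivatives, not the study's data, and the Q^2 computed from them is illustrative only.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical descriptor matrix for 12 derivatives:
# columns = ALOGP, AClogS, BBB (values illustrative only)
X = np.array([
    [2.1, -3.4, 0.7], [1.8, -3.1, 0.6], [2.5, -3.9, 0.8],
    [3.0, -4.2, 0.9], [1.5, -2.8, 0.5], [2.2, -3.5, 0.7],
    [2.8, -4.0, 0.8], [1.9, -3.2, 0.6], [2.4, -3.7, 0.7],
    [2.0, -3.3, 0.6], [2.7, -3.8, 0.8], [1.6, -2.9, 0.5],
])
y = np.array([62, 55, 70, 78, 48, 64, 74, 57, 68, 60, 72, 50])  # % germination (invented)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 (fit):", model.score(X, y))

# Leave-one-out cross-validation (Q^2)
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print("Q^2 (LOO):", q2)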

Keywords: chemometrics, germination activity, molecular modeling, QSAR analysis, strigolactones

Procedia PDF Downloads 261
301 The Future of Insurance: P2P Innovation versus Traditional Business Model

Authors: Ivan Sosa Gomez

Abstract:

Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is beginning to have a significant impact as well. P2P insurance is an innovation that enables policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. Although P2P platforms do not change the fundamental basis of insurance, they may enable more efficient business models for covering risk. It is therefore relevant to determine whether P2P innovation can have substantial effects on the future of the insurance sector. For this purpose, P2P innovation is examined from a business perspective, and a traditional model and a P2P model are compared from an actuarial perspective. Objectives: The objectives are (1) to characterize the P2P business model in comparison with the traditional insurance model and (2) to compare a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is action research, aimed at understanding and solving the problems of a community in its environment by applying theory and best practices. The study follows the participatory variant, in which the participants, regarded here as experts, collaborate in the research; prolonged immersion in the field serves as the main instrument for data collection. Finally, an actuarial model for premium calculation is developed that allows projections of future scenarios and conclusions about the two models. Main Contributions: From an actuarial and business perspective, we contribute a comparison of the two models' risk coverage in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.
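
As a stylized illustration of the actuarial comparison, the Python sketch below contrasts a loaded traditional premium with a simplified P2P pool that returns unclaimed funds as cashback; all parameters (pool size, claim probability, severity, loadings) are assumptions chosen for demonstration, not figures from the study.

import numpy as np

rng = np.random.default_rng(42)

n, p, severity = 1000, 0.05, 2000.0   # pool size, claim prob., mean claim (illustrative)
expected_loss = p * severity          # pure premium per policyholder

# Traditional: pure premium plus a fixed loading for expenses and profit
traditional_premium = expected_loss * (1 + 0.35)

# P2P (simplified): members contribute the pure premium plus a smaller fee;
# the unused part of the pool is returned as cashback at year end
contribution = expected_loss * (1 + 0.10)
claims = rng.binomial(1, p, n) * rng.exponential(severity, n)
pool = n * contribution
cashback = max(pool - claims.sum(), 0) / n
effective_p2p_cost = contribution - cashback

print(f"traditional premium : {traditional_premium:8.2f}")
print(f"P2P effective cost  : {effective_p2p_cost:8.2f}")

Under these assumptions, the P2P member's effective cost depends on the realized claims, while the traditional premium is fixed up front; that trade-off between cost and certainty is exactly what the actuarial comparison must quantify.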

Keywords: insurtech, innovation, business model, P2P, insurance

Procedia PDF Downloads 66