Search results for: multi-scale computational modelling
1048 Use of Transportation Networks to Optimize The Profit Dynamics of the Product Distribution
Authors: S. Jayasinghe, R. B. N. Dissanayake
Abstract:
Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize the net profit by minimizing the transportation cost. The model covers transportation from decentralized production plants to a centralized distribution centre and the subsequent distribution to island-wide agencies, treating customer satisfaction as a requirement. The company produces 9 types of food items in 82 varieties and 4 types of non-food items in 34 varieties. Of its 6 production plants, 4 are located near the city of Mawanella and the other 2 in Galewala and Anuradhapura, 80 km and 150 km away from Mawanella respectively. The warehouse in Mawanella was the main production plant and the only distribution plant; it distributes manufactured products to 39 agencies island-wide. Average values and quantities of goods for 6 consecutive months, from May 2013 to October 2013, were collected, and average demand values were calculated. The model's optimality conditions impose the following constraints: there is one source, 39 destinations, and total supply equals total demand across all agencies. Total transport cost was calculated from the transport cost per kilometre, and the model was formulated using the distances and flows of the distribution. Network optimization and linear programming techniques were used to formulate the model, and Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil its customers' requirements for a month, considerably less than the cost observed without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers.
The company currently satisfies 85% of its customers' requirements; with this model, satisfaction can be increased to 97%. The model can therefore be used by other, similar companies to reduce their transportation costs.
Keywords: mathematical model, network optimization, linear programming
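The balanced transportation formulation described above can be sketched as a small linear program. The plant capacities, agency demands, and unit costs below are illustrative placeholders, not the company's data:

```python
import numpy as np
from scipy.optimize import linprog

def transport_lp(cost, supply, demand):
    """Solve a balanced transportation problem: min sum_ij c_ij * x_ij."""
    m, n = cost.shape
    A_eq, b_eq = [], []
    # Each plant ships exactly its supply.
    for i in range(m):
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(supply[i])
    # Each agency receives exactly its demand.
    for j in range(n):
        col = np.zeros(m * n)
        col[j::n] = 1.0
        A_eq.append(col)
        b_eq.append(demand[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(m, n)

# Toy instance: 2 plants, 3 agencies, total supply == total demand (balanced).
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 7.0]])
total, flow = transport_lp(cost, supply=[20, 30], demand=[10, 25, 15])
```

For this toy instance the optimal cost is 240 cost units; the study's full model is the same structure scaled to 39 agencies with per-kilometre costs.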
Procedia PDF Downloads 346
1047 Determining Factors for Opening Accounts, Customers’ Perception and Their Satisfaction Level Towards the First Security Islamic Bank of Bangladesh
Authors: Md. Akiz Uddin
Abstract:
This research attempted to identify the determining factors that persuade customers of the First Security Islamic Bank Limited (FSIBL) to open accounts, along with their perception of and satisfaction with the bank. Initially, a theoretical model was established based on a review of the existing literature. A self-administered structured questionnaire was then developed, and data were collected from 180 customers of FSIBL in Bangladesh using a purposive sampling technique. The collected data were analyzed with statistical software, and Structural Equation Modelling (SEM) was used to verify the model and test the hypotheses. The study examined the determinants of opening accounts and customers' perception and satisfaction with respect to several factors: the bank's compliance with Shariah law, use of modern technology, assurance, reliability, empathy, profitability, and responsiveness. To examine the impact of religious belief on becoming an FSIBL client, the study also investigated non-Muslim clients' perceptions of FSIBL. The study focused on FSIBL customers from five branches in Dhaka city. It found that religious belief is the most significant factor leading Muslim customers to open an account with FSIBL, and that they are satisfied with the services. For non-Muslim customers, other benefits, such as e-banking and various user-friendly services, are the most significant factors in choosing FSIBL, and their satisfaction level is also statistically significant. Furthermore, although non-Muslim customers did not consider religious belief a determining factor in choosing FSIBL, respondents reported trusting that people who believe in Shariah law are more reliable custodians of their money.
These findings open an avenue for future researchers to study this area further by employing a larger sample size and more branches, and by extending the current model with new variables. The study is an important addition to the literature on Islamic banking, service quality, and customer satisfaction, particularly regarding the success of the Islamic banking system in Bangladesh.
Keywords: islamic banking, customers’ satisfaction, customers’ perception, shariah law
Procedia PDF Downloads 75
1046 Clathrate Hydrate Measurements and Thermodynamic Modelling for Refrigerants with Electrolytes Solution in the Presence of Cyclopentane
Authors: Peterson Thokozani Ngema, Paramespri Naidoo, Amir H. Mohammadi, Deresh Ramjugernath
Abstract:
Phase equilibrium (dissociation) data for clathrate hydrates (gas hydrates) were measured for systems involving fluorinated refrigerants with single and mixed electrolyte (NaCl, CaCl₂, MgCl₂, and Na₂SO₄) aqueous solutions at various salt concentrations, in the absence and presence of cyclopentane (CP). The ternary systems of (R410a or R507) with water in the presence of CP were measured over temperature and pressure ranges of (279.8 to 294.4) K and (0.158 to 1.385) MPa, respectively. Measurements for R410a with a single electrolyte {NaCl or CaCl₂} solution in the presence of CP were undertaken at salt concentrations of (0.10, 0.15, and 0.20) mass fraction over (278.4 to 293.7) K and (0.214 to 1.179) MPa. The system of R410a with Na₂SO₄ aqueous solution was investigated at a salt concentration of 0.10 mass fraction over (283.3 to 291.6) K and (0.483 to 1.373) MPa. Measurements for {R410a or R507} with mixed electrolyte {NaCl, CaCl₂, MgCl₂} aqueous solutions were undertaken at salt concentrations of (0.002 to 0.15) mass fraction over (274.5 to 292.9) K and (0.149 to 1.119) MPa, in the absence and presence of CP; no data combining mixed salts and a promoter have previously been published. The phase equilibrium measurements were performed using a non-visual isochoric equilibrium cell employing the pressure-search technique. This study focuses on obtaining equilibrium data that can be utilized to design and optimize industrial wastewater and desalination processes and to support the development of the Hydrate Electrolyte–Cubic Plus Association (HE–CPA) Equation of State. The results show a marked effect of the promoter (CP) on hydrate formation, as it raises the dissociation temperatures to near-ambient conditions.
The measured data were modelled using the developed HE–CPA equation of state, and the model results agree closely with the measured hydrate dissociation data.
Keywords: association, desalination, electrolytes, promoter
Procedia PDF Downloads 243
1045 Numerical Heat Transfer Performance of Water-Based Graphene Nanoplatelets
Authors: Ahmad Amiri, Hamed K. Arzani, S. N. Kazi, B. T. Chew
Abstract:
Since graphene nanoplatelets (GNPs) are promising materials with desirable thermal properties, this paper examines the thermophysical properties and heat transfer performance of a covalently functionalized GNP-based water/ethylene glycol nanofluid flowing through an annular channel. After experimentally measuring the thermophysical properties of the prepared samples, a computational fluid dynamics study was carried out to examine the heat transfer and pressure drop of the well-dispersed and stabilized nanofluids. The effects of GNP concentration and Reynolds number on the convective heat transfer coefficient were investigated under a constant wall temperature boundary condition in the turbulent flow regime. The results show that, across the Reynolds numbers considered, the convective heat transfer coefficient of the prepared nanofluid is higher than that of the base fluid, and that both the convective heat transfer coefficient and the thermal conductivity increase with GNP concentration. Overall, loading well-dispersed GNPs into the base fluid significantly enhances the heat transfer rate.
Keywords: nanofluid, turbulent flow, forced convection flow, graphene, annular, annulus
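As a rough point of reference for single-phase turbulent convection in a duct, a convective coefficient can be estimated from the generic Dittus-Boelter correlation. This is not the paper's CFD model or its measured nanofluid properties; the Reynolds number, Prandtl number, conductivity, and hydraulic diameter below are assumed, water-like values:

```python
def dittus_boelter_h(re, pr, k, d_h):
    """Convective coefficient h = Nu * k / D_h with Nu = 0.023 Re^0.8 Pr^0.4
    (heating form; valid roughly for Re > 1e4 and 0.6 < Pr < 160)."""
    nu = 0.023 * re**0.8 * pr**0.4
    return nu * k / d_h

# Water-like fluid in an annulus of hydraulic diameter 10 mm.
h = dittus_boelter_h(re=2.0e4, pr=6.0, k=0.6, d_h=0.01)  # W/(m^2 K)
```

The Re^0.8 dependence is why the heat transfer coefficient grows with Reynolds number in the trends the abstract reports.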
Procedia PDF Downloads 354
1044 A Review on Higher-Order Spline Techniques for Solving Burgers Equation Using B-Spline Methods and Variation of B-Spline Techniques
Authors: Maryam Khazaei Pool, Lori Lewis
Abstract:
This paper summarizes articles on higher-order B-spline methods and their variations, including the Quadratic B-spline Finite Elements Method, the Exponential Cubic B-Spline Method, the Septic B-spline Technique, the Quintic B-spline Galerkin Method, and B-spline Galerkin methods based on the Quadratic B-spline Galerkin method (QBGM) and the Cubic B-spline Galerkin method (CBGM). We study these B-spline methods and their variations for the numerical solution of the Burgers' equation. A set of fundamental definitions, including the Burgers equation, spline functions, and B-spline functions, is provided. For each method, the main technique is discussed, along with its discretization and stability analysis. A summary of the numerical results is provided, and the efficiency of each presented method is discussed. A general conclusion compares the computational results of all the presented schemes and describes their effectiveness and advantages.
Keywords: Burgers’ equation, Septic B-spline, modified cubic B-spline differential quadrature method, exponential cubic B-spline technique, B-spline Galerkin method, quintic B-spline Galerkin method
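All of the surveyed schemes build on the same B-spline basis functions, which can be evaluated with the Cox-de Boor recursion. A minimal sketch, not tied to any particular method in the review (the cubic degree and clamped knot vector are illustrative choices):

```python
def bspline_basis(i, p, x, t):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis
    function at x on knot vector t, with 0/0 terms taken as 0."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, x, t)
    if t[i + p + 1] != t[i + 1]:
        right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                 * bspline_basis(i + 1, p - 1, x, t))
    return left + right

# Clamped cubic knot vector on [0, 4] giving 7 basis functions.
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
vals = [bspline_basis(i, 3, 1.5, knots) for i in range(7)]
```

The partition-of-unity property (the basis values summing to 1 at any interior point) is what the Galerkin and finite element variants above rely on when expanding the Burgers' solution in this basis.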
Procedia PDF Downloads 123
1043 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely. They rely on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally performed by either Eigenvalue Decomposition (EVD) or Singular Value Decomposition (SVD) of the auto-correlation matrix (ACM); however, this decomposition is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system is considered using a noise-subspace-based method. An efficient method to estimate the multipath time delays is proposed, applying the MUltiple SIgnal Classification (MUSIC) algorithm to the null space extracted by the Rank Revealing LU factorization (RRLU). The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method approximately halves the computational complexity compared with RRQR-based methods while maintaining the same performance. Computer simulations are included to demonstrate the effectiveness of the proposed scheme.
Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT
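The MUSIC step can be illustrated with a toy sketch. Note the assumptions: an EVD of the sample covariance stands in for the paper's RRLU null-space extraction, and a single complex sinusoid's normalized frequency stands in for a multipath delay; sensor count, snapshot count, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
M, f0, snapshots = 8, 0.2, 200          # sensors, true normalized freq, snapshots

def steering(f, m):
    """Response vector of an m-element array to normalized frequency f."""
    return np.exp(2j * np.pi * f * np.arange(m))

# Rank-1 snapshots x_t = a(f0) * s_t + small noise, then sample covariance.
A = steering(f0, M)[:, None]
S = rng.standard_normal((1, snapshots)) + 1j * rng.standard_normal((1, snapshots))
noise = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + 0.01 * noise
R = X @ X.conj().T / snapshots

# Noise subspace = eigenvectors of the M-1 smallest eigenvalues (eigh sorts ascending).
w, V = np.linalg.eigh(R)
En = V[:, : M - 1]

# MUSIC pseudospectrum peaks where a(f) is orthogonal to the noise subspace.
grid = np.arange(0.0, 0.5, 0.002)
P = [1.0 / (np.linalg.norm(En.conj().T @ steering(f, M)) ** 2 + 1e-12) for f in grid]
f_hat = grid[int(np.argmax(P))]
```

The paper's contribution is precisely to replace the expensive eigendecomposition step above with the cheaper RRLU factorization while keeping the same null-space projection.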
Procedia PDF Downloads 408
1042 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow
Authors: Itissam Abuiziah
Abstract:
In several hydraulic systems, it is necessary to calculate head losses, which depend on the flow resistance friction factor in the Darcy equation. Computing this friction factor is based on the implicit Colebrook-White equation, the standard for friction calculation, but solving it carries a high computational cost; several explicit approximation methods are therefore used in its place. The relative error is then used to determine the most accurate of the approximations considered. Steel, cast iron, and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results show that a method suitable for some cases may not be accurate for others. For example, for steel pipes, Zigrang and Silvester's method proved the most precise at low velocities (0.6 m/s to 1.3 m/s), while the Haaland method showed a lower relative error as velocity increased. The simulation results of this study can accordingly be employed by hydraulic engineers to decide which method is most applicable to their practical pipe systems.
Keywords: Colebrook–White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers
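The comparison described above can be sketched for one explicit approximation: a fixed-point iteration on the implicit Colebrook-White equation as the reference, against the explicit Haaland formula. The Reynolds number and relative roughness are illustrative values, and the Zigrang-Silvester form is not reproduced here:

```python
import math

def colebrook(re, rel_rough, tol=1e-12):
    """Iterate x = 1/sqrt(f) on 1/sqrt(f) = -2 log10(k/(3.7 D) + 2.51/(Re sqrt(f)))."""
    x = 7.0                       # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x**2

def haaland(re, rel_rough):
    """Explicit Haaland approximation to the Colebrook-White equation."""
    return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / re)) ** -2

re, rr = 1.0e5, 1.0e-4            # Reynolds number and relative roughness k/D
f_exact = colebrook(re, rr)
f_approx = haaland(re, rr)
rel_err = abs(f_approx - f_exact) / f_exact
```

Sweeping `re` and `rr` over the diameters and velocities listed above, and repeating for each explicit formula, reproduces the kind of relative-error comparison the study reports.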
Procedia PDF Downloads 185
1041 Privacy Preserving in Association Rule Mining on Horizontally Partitioned Database
Authors: Manvar Sagar, Nikul Virpariya
Abstract:
Advances in data mining techniques play an important role in many applications. In the context of privacy and security, the problems caused by association rule mining have been investigated by many researchers, and it has been shown that misuse of the technique may reveal a database owner's sensitive and private information to others. Many researchers have therefore worked to preserve privacy in association rule mining. Of the two basic approaches to privacy-preserving data mining, randomization-based and cryptography-based, the latter provides a high level of privacy but incurs higher computational and communication overheads. Hence, it is necessary to explore alternative techniques that reduce these overheads. In this work, we propose an efficient, collusion-resistant, cryptography-based approach for distributed association rule mining using Shamir's secret sharing scheme. As we show through theoretical and practical analysis, our approach is provably secure and requires a trusted third party only once. We use secret sharing to share information privately and a code-based identification scheme to add protection against malicious adversaries.
Keywords: Privacy, Privacy Preservation in Data Mining (PPDM), horizontally partitioned database, EMHS, MFI, shamir secret sharing
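The Shamir primitive underlying the protocol can be sketched over a prime field: a (k, n) split where any k shares reconstruct the secret by Lagrange interpolation at x = 0. This shows only the primitive, not the paper's distributed mining protocol (its EMHS/MFI machinery is not reproduced), and the modulus and secret are illustrative:

```python
import random

P = 2_147_483_647                       # prime field modulus (2^31 - 1)

def make_shares(secret, k, n, rng):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Divide via the modular inverse (Fermat's little theorem).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(42)
shares = make_shares(123456789, k=3, n=5, rng=rng)
recovered = reconstruct(shares[:3])
```

Fewer than k shares reveal nothing about the secret, which is what lets the distributed sites exchange masked itemset counts without disclosing their local data.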
Procedia PDF Downloads 406
1040 Machine Learning Approach for Mutation Testing
Authors: Michael Stewart
Abstract:
Mutation testing is a type of software testing proposed in the 1970s in which program statements are deliberately changed to introduce simple errors, so that test cases can be validated by whether they detect those errors. Test cases are executed against the mutant code to determine whether one fails and detects the error, giving confidence that the program is correct. One major issue with this type of testing is that generating and testing all possible mutations of a complex program is computationally intensive. This paper applies reinforcement learning and parallel processing to mutation testing for the selection of mutation operators and test cases, reducing the computational cost of testing and improving test suite effectiveness. Experiments were conducted on sample programs to determine how well the reinforcement learning-based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve its prediction accuracy. Performance was then evaluated on multi-processor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50–100%.
Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing
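One way reinforcement learning can rank mutation operators is as a bandit problem: treat each operator as an arm and the event "the generated mutant was killed by the suite" as the reward. This is a toy epsilon-greedy sketch, not the paper's algorithm, and the per-operator kill rates are invented:

```python
import random

rng = random.Random(0)
kill_rate = [0.2, 0.8, 0.5]      # hypothetical per-operator mutant-kill probabilities
Q = [0.0] * 3                    # estimated value of each mutation operator
N = [0] * 3                      # pull counts
eps = 0.1

for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best-looking operator, sometimes explore.
    if rng.random() < eps:
        a = rng.randrange(3)
    else:
        a = max(range(3), key=Q.__getitem__)
    reward = 1.0 if rng.random() < kill_rate[a] else 0.0   # mutant killed?
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]   # incremental mean update

best = max(range(3), key=Q.__getitem__)
```

Concentrating mutant generation on high-value operators is one mechanism by which a learned policy can cut the number of operators exercised, consistent with the 50–100% reduction reported.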
Procedia PDF Downloads 197
1039 Scale Effects on the Wake Airflow of a Heavy Truck
Authors: Aude Pérard Lecomte, Georges Fokoua, Amine Mehel, Anne Tanière
Abstract:
Air quality in urban areas is deteriorating because of pollution, driven mainly by the constant growth in traffic from different types of ground vehicles. In particular, particulate matter pollution, present at substantial concentrations in urban areas, can cause serious health issues. Characterizing and understanding particle dynamics is therefore essential to establish recommendations for improving urban air quality. To analyze the effects of turbulence on the dispersion of particulate pollutants, the first step is to focus on the single-phase flow structure and turbulence characteristics in the wake of a heavy truck model. To achieve this, Computational Fluid Dynamics (CFD) simulations were conducted to model the wake airflow of a full- and a reduced-scale heavy truck, using the Reynolds-Averaged Navier-Stokes (RANS) approach with the Reynolds Stress Model (RSM) as the turbulence closure. The simulations reveal a large vortex originating from under the trailer. This vortex belongs to the recirculation region located in the near-wake of the heavy truck. These vortical structures are expected to have a strong influence on the dynamics of particles emitted by the truck.
Keywords: CFD, heavy truck, recirculation region, reduced scale
Procedia PDF Downloads 216
1038 Numerical Study on Parallel Rear-Spoiler on Super Cars
Authors: Anshul Ashu
Abstract:
Computers are applied to vehicle aerodynamics in two ways: Computational Fluid Dynamics (CFD) and Computer-Aided Flow Visualization (CAFV). CFD is chosen here because it presents results with computer graphics. The simulation of the flow field around a vehicle is one of the important CFD applications; the flow field can be solved numerically using panel methods, the k-ε method, or direct simulation methods. A spoiler is a tool in vehicle aerodynamics used to minimize unfavorable aerodynamic effects around the vehicle, and a parallel spoiler is a set of two spoilers designed so as to effectively reduce drag. In this study, the standard k-ε model is used with simplified versions of the Bugatti Veyron, Audi R8, and Porsche 911 to simulate the external flow field at varying Reynolds numbers. The flow simulation comprises three levels: first over the model without a rear spoiler, second over the model with a single rear spoiler, and third over the model with a parallel rear-spoiler. The second and third levels vary the following parameters: the shape of the spoiler, the angle of attack, and the attachment position. A thorough analysis of the simulation results has been carried out, and a new parallel spoiler has been designed. It shows a modest improvement in vehicle aerodynamics, with a decrease in aerodynamic drag and lift, leading to better fuel economy and traction force for the model.
Keywords: drag, lift, flow simulation, spoiler
Procedia PDF Downloads 498
1037 Passport Bros: Exploring Neocolonial Masculinity and Sex Tourism as a Response to Shifting Gender Dynamics
Authors: Kellen Sharp
Abstract:
This study explores the phenomenon of 'passport bros', a subset of the manosphere responding to perceived crises in masculinity amidst changing gender dynamics. Through a computational analysis of the passport bro community, the research addresses normative beliefs, deviations from MGTOW ideology, and discussions of nationality, race, and gender. Originating in the MGTOW movement, passport bros take a neocolonial approach by seeking traditional, non-Western women, attributing this pursuit to dissatisfaction with modern Western women. The paper examines how heteropessimism within MGTOW shapes the emergence of passport bros, leading to the adoption of red pill ideologies and ultimately manifesting as sex tourism. Analyzing data collected from passport bro forums through computer-assisted content analysis, the study identifies key discourses: questions and answers, money, attitudes towards Western and traditional women, and discussions about the movement itself. The findings highlight the nuanced intersection of gender, race, and global power dynamics within the passport bro community, shedding light on its motivations and its impact on neocolonial legacies.
Keywords: toxic online community, manosphere, gender and media, neocolonialism
Procedia PDF Downloads 73
1036 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances around their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Computer vision and machine learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) introduce complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models to the LiDAR points of each cluster using a least-squares method, and an iterative learning process identifies candidate conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
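The per-cluster fitting step can be illustrated in 2D: a least-squares fit of the catenary y = y0 + c(cosh((x - x0)/c) - 1) to conductor points. This is a simplified sketch, assuming synthetic noise-free points and a 2D span rather than the paper's 3D LiDAR clusters; the span length and parameters are invented:

```python
import numpy as np
from scipy.optimize import least_squares

def catenary(x, x0, y0, c):
    """Catenary with lowest point (x0, y0) and shape parameter c (sets the sag)."""
    return y0 + c * (np.cosh((x - x0) / c) - 1.0)

# Synthetic "conductor" points sampled between two poles 60 m apart.
x = np.linspace(0.0, 60.0, 40)
true_params = (30.0, 12.0, 80.0)            # x0, y0, c
y = catenary(x, *true_params)

# Least-squares fit from a rough initial guess.
res = least_squares(lambda p: catenary(x, *p) - y, x0=[25.0, 10.0, 60.0])
x0_fit, y0_fit, c_fit = res.x
```

In the full method, a robust variant of this fit, run per cluster and iterated between candidate pole pairs, is what separates true conductors from noise points.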
Procedia PDF Downloads 99
1035 Flow Analysis of Viscous Nanofluid Due to Rotating Rigid Disk with Navier’s Slip: A Numerical Study
Authors: Khalil Ur Rehman, M. Y. Malik, Usman Ali
Abstract:
In this paper, the problem proposed by Von Karman is treated in the presence of additional flow field effects, with the liquid spaced above a rotating rigid disk. More specifically, a purely viscous fluid flow generated by a rotating rigid disk with Navier's slip condition is considered in both magnetohydrodynamic and hydrodynamic frames. The rotating flow regime includes a heat source/sink and chemically reactive species, and the effects of thermophoresis and Brownian motion are captured by a nanofluid model. The flow field formulation is obtained mathematically as a system of high-order differential equations, and the reduced system is solved numerically using a self-coded computational algorithm. The pertinent outcomes are discussed systematically and presented graphically and in tables. The article's dual framework makes this study attractive, and validation of the results against existing work confirms the correctness of the self-coded algorithm for the flow regime over a rotating rigid disk.
Keywords: Navier’s condition, Newtonian fluid model, chemical reaction, heat source/sink
Procedia PDF Downloads 170
1034 The Algorithm of Semi-Automatic Thai Spoonerism Words for Bi-Syllable
Authors: Nutthapat Kaewrattanapat, Wannarat Bunchongkien
Abstract:
The purposes of this research are to study and develop an algorithm for Thai spoonerisms using a semi-automatic computer program: syllables are already separated at data input, and the developed algorithm performs the spoonerism. The algorithm establishes rules and mechanisms for bi-syllable Thai spoonerisms by analysing the elements of each syllable, namely the cluster consonant, vowel, intonation mark, and final consonant. The study found that bi-syllable Thai spoonerism involves a single mechanism: transposing the vowel, intonation mark, and final consonant of the two syllables while keeping each syllable's initial consonant and cluster (if any). The resulting rules and mechanisms were implemented as Thai spoonerism software using PHP. In a performance test, the program produced correct bi-syllable Thai spoonerisms for 99% of the test words; the remaining 1% were faults arising because the spoonerised words were not spelled in conformity with Thai grammar, and because a Thai spoonerism can have more than one valid answer.
Keywords: algorithm, spoonerism, computational linguistics, Thai spoonerism
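The single transposition mechanism described above can be sketched as a toy, assuming each syllable is already decomposed into an initial (cluster) consonant and a rime carrying the vowel, tone mark, and final consonant. The romanized syllables are invented placeholders; real Thai requires script-aware decomposition, which is the PHP software's job:

```python
def spoonerize(s1, s2):
    """Swap the rimes (vowel + tone mark + final consonant) of two syllables,
    keeping each syllable's initial (cluster) consonant in place."""
    (onset1, rime1), (onset2, rime2) = s1, s2
    return (onset1, rime2), (onset2, rime1)

# Romanized toy example: "khaa" + "mii" -> "khii" + "maa"
out = spoonerize(("kh", "aa"), ("m", "ii"))
```

The 1% failure cases arise downstream of this swap, when the recombined syllables violate Thai spelling rules or admit more than one valid answer.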
Procedia PDF Downloads 234
1033 Estimating the Timing Interval for Malarial Indoor Residual Spraying: A Modelling Approach
Authors: Levicatus Mugenyi, Joaniter Nankabirwa, Emmanuel Arinaitwe, John Rek, Niel Hens, Moses Kamya, Grant Dorsey
Abstract:
Background: Indoor residual spraying (IRS) reduces vector densities and malaria transmission; however, the most effective spraying intervals for IRS have not been well established. We aim to estimate the optimal timing interval for IRS using a modelling approach. Methods: We use a generalized additive model to estimate the optimal timing interval for IRS from the predicted malaria incidence. The model is applied to post-IRS cohort clinical data from children aged 0.5-10 years in selected households in Tororo, historically a high-transmission setting in Uganda. Six rounds of IRS were implemented in Tororo during the study period (3 rounds with bendiocarb, December 2014 to December 2015, and 3 rounds with actellic, June 2016 to July 2018). Results: Monthly malaria incidence from October 2014 to February 2019 decreased from 3.25 to 0.0 episodes per person-year in children under 5 years, and from 1.57 to 0.0 in 5-10 year-olds. The optimal time interval for IRS differed between bendiocarb and actellic and by IRS round: it was estimated at 17 and 40 weeks after the first round of bendiocarb and actellic, respectively, and at 36 weeks after the third round of actellic. However, we could not estimate from the data the optimal time after the second and third rounds of bendiocarb or after the second round of actellic. Conclusion: We conclude that, to sustain the effect of IRS in a high-to-medium transmission setting, the second round of bendiocarb should be applied roughly 17 weeks, and the second round of actellic roughly 40 weeks, after the first round, with timing differing for subsequent rounds. The amount of rainfall influenced neither the trend in malaria incidence after IRS nor the IRS timing intervals. Our results suggest that shorter application intervals could be more effective than current practice, which is about 24 weeks for bendiocarb and 48 weeks for actellic.
However, when considering our findings, one should account for the cost of IRS and for insecticide resistance. We also recommend that timing and incidence be monitored in the future to improve these estimates.
Keywords: incidence, indoor residual spraying, generalized additive model, malaria
Procedia PDF Downloads 120
1032 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics
Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones
Abstract:
Motor-vehicle-related pedestrian road traffic collisions are a major road safety challenge; they are a leading cause of death and serious injury worldwide, contributing a third of the global disease burden. The auto rickshaw, a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike a 'typical' four-wheel motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open-frame construction, a canvas roof and side curtains, a small driver's cabin, handlebar controls, and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not produced significant accident statistics. Auto rickshaws have therefore not been subject to the pedestrian crash kinematic analyses and injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America, and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a finite element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre, or offset), and relative pedestrian impact position (front, side, and rear).
The output data of each impact simulation were correlated against reported injury metrics, the Head Injury Criterion (front, side, and rear), the Neck Injury Criterion (front, side, and rear), and the Abbreviated Injury Scale with its reported risk levels, adding to the understanding of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, and some of the impact simulations indicate a risk of fatality. The present study provides valuable evidence for a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present findings, based in the field of safety engineering, may over-represent injury risk compared with real-world accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, the vehicle's velocity should be reduced and, or, engineering solutions considered, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions that carry the greatest risk of producing pedestrian injury.
Keywords: auto rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact
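The Head Injury Criterion used in the correlation above is computable directly from a head acceleration trace: the maximum over time windows (up to 15 ms for HIC15) of (t2 - t1) times the 2.5th power of the mean acceleration in g. A minimal sketch on a synthetic constant pulse, not a simulated dummy trace:

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """HIC = max over windows (t2-t1) <= max_window of
    (t2 - t1) * (mean acceleration over the window, in g) ** 2.5."""
    # Cumulative trapezoidal integral of a(t), so window means are O(1) lookups.
    v = np.concatenate(([0.0], np.cumsum((a[1:] + a[:-1]) / 2.0 * np.diff(t))))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window + 1e-12:   # small slack for float grid spacing
                break
            avg = (v[j] - v[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Constant 60 g pulse lasting 50 ms, sampled every 0.5 ms.
t = np.arange(0.0, 0.0501, 0.0005)
a = np.full_like(t, 60.0)
value = hic(t, a)   # for a constant pulse this is analytically 0.015 * 60**2.5
```

Real traces are filtered resultant accelerations, but the window search is the same; the simulated values are then read against the injury risk curves the study cites.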
Procedia PDF Downloads 193
1031 Clustering of Association Rules of ISIS & Al-Qaeda Based on Similarity Measures
Authors: Tamanna Goyal, Divya Bansal, Sanjeev Sofat
Abstract:
Early detection, distinction, and prediction of world-threatening terrorist attacks demand functionally accurate and precise analysis of terrorism data, and many data mining and statistical approaches exist to ensure such accuracy. The computational extraction of derived patterns is a non-trivial task, comprising domain-specific discovery by means of sophisticated algorithm design and analysis. This paper proposes an approach for similarity extraction: useful attributes are obtained from the available datasets of terrorist attacks, a feature selection technique based on statistical impurity measures is applied, and clustering is then performed on the basis of similarity measures. The associative dependencies between attacks are analyzed according to the degree of participation of attributes in the rules. To compute the similarity among the discovered rules, we applied a weighted similarity measure; finally, the rules are grouped by hierarchical clustering. We applied the approach to an open-source dataset to determine the usability and efficiency of our technique, and a literature search was also carried out to support the efficiency and accuracy of our results.
Keywords: association rules, clustering, similarity measure, statistical approaches
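The pipeline's last two steps can be sketched as a weighted Jaccard similarity over rules (treated as attribute sets) followed by single-linkage agglomerative clustering. The rules, attribute names, participation weights, and threshold below are invented placeholders, not the paper's measure or data:

```python
from itertools import combinations

def weighted_jaccard(r1, r2, weights):
    """Similarity of two rules (attribute sets), weighting each attribute
    by its degree of participation across the rule base."""
    inter = sum(weights.get(a, 1.0) for a in r1 & r2)
    union = sum(weights.get(a, 1.0) for a in r1 | r2)
    return inter / union if union else 0.0

def single_linkage(rules, weights, threshold):
    """Agglomerative clustering: repeatedly merge the most similar pair of
    clusters while their best inter-cluster similarity exceeds `threshold`."""
    clusters = [{i} for i in range(len(rules))]
    def sim(c1, c2):
        return max(weighted_jaccard(rules[i], rules[j], weights)
                   for i in c1 for j in c2)
    while len(clusters) > 1:
        (a, b), s = max(((pair, sim(clusters[pair[0]], clusters[pair[1]]))
                         for pair in combinations(range(len(clusters)), 2)),
                        key=lambda item: item[1])
        if s < threshold:
            break
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

rules = [{"weapon", "region", "group"}, {"weapon", "region", "year"},
         {"casualties", "city"}]
weights = {"weapon": 2.0, "region": 2.0}   # hypothetical participation weights
out = single_linkage(rules, weights, threshold=0.5)
```

The first two rules share heavily weighted attributes, so they merge into one cluster while the unrelated rule stays apart, mirroring how attacks with similar rule profiles are grouped.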
Procedia PDF Downloads 320
1030 MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks
Authors: Siddhant Rao
Abstract:
Studies estimate that there will be 266,120 new cases of invasive breast cancer and 40,920 breast cancer induced deaths in 2018 alone. Despite the pervasiveness of this affliction, the current process for obtaining an accurate breast cancer prognosis is tedious and time consuming. It usually requires a trained pathologist to manually examine histopathological images and identify the features that characterize various cancer severity levels. We propose MITOS-RCNN: a region based convolutional neural network (RCNN) geared for small object detection to accurately grade one of the three factors that characterize tumor belligerence described by the Nottingham Grading System: mitotic count. Other computational approaches to mitotic figure counting and detection do not demonstrate ample recall or precision to be clinically viable. Our model outperformed all previous participants in the ICPR 2012 challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14 challenge, along with recently published works. Our model achieved an F-measure score of 0.955, a 6.11% improvement in accuracy over the most accurate of the previously proposed models.
Keywords: breast cancer, mitotic count, machine learning, convolutional neural networks
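The F-measure reported above is the standard harmonic mean of precision and recall. A quick sketch of how it is computed from detection counts; the counts below are toy numbers chosen to reproduce 0.955, not the paper's data:

```python
# F-measure (F1) from detection counts: true positives, false positives,
# false negatives.

def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of true objects that are found
    return 2 * precision * recall / (precision + recall)

# e.g. 191 true mitoses found, 9 false detections, 9 missed
print(round(f_measure(191, 9, 9), 3))  # → 0.955
```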
Procedia PDF Downloads 223
1029 Algorithms Minimizing Total Tardiness
Authors: Harun Aydilek, Asiye Aydilek, Ali Allahverdi
Abstract:
Total tardiness is a widely used performance measure in the scheduling literature. This measure is particularly important in situations where there is a cost to completing a job beyond its due date: the cost increases as the gap between a job's due date and its completion time grows, and such costs may also take the form of contractual penalties or loss of goodwill. The measure matters because the fulfillment of customers' due dates has to be taken into account while making scheduling decisions. The problem has been addressed in the literature; however, setup times have been assumed to be zero. Even though this assumption may be valid for some environments, it is not valid for others. When setup times are treated as separate from processing times, it is possible to increase machine utilization and to reduce total tardiness. Therefore, non-zero setup times need to be considered separately. A dominance relation is developed and several algorithms are proposed; the developed dominance relation is utilized in the proposed algorithms. Extensive computational experiments are conducted for the evaluation of the algorithms. The experiments indicated that the developed algorithms perform much better than the existing algorithms in the literature. More specifically, one of the newly proposed algorithms reduces the error of the best existing algorithm in the literature by 40 percent.
Keywords: algorithm, assembly flowshop, dominance relation, total tardiness
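The performance measure itself is simple to state. A minimal sketch of evaluating the total tardiness of a job sequence when setup times are treated separately from processing times, as the abstract advocates; all job names, times, and the setup table are toy data:

```python
# Total tardiness of a sequence with sequence-dependent setup times.
# setup[i][j] = setup time when job j immediately follows job i;
# setup[None][j] is the initial setup for the first job.

def total_tardiness(sequence, processing, due, setup):
    t = 0          # current completion time on the machine
    tardiness = 0
    prev = None
    for job in sequence:
        t += setup[prev][job] + processing[job]
        tardiness += max(0, t - due[job])  # only lateness beyond due date counts
        prev = job
    return tardiness

processing = {"A": 4, "B": 3, "C": 5}
due = {"A": 5, "B": 8, "C": 14}
setup = {None: {"A": 1, "B": 2, "C": 1},
         "A": {"B": 1, "C": 2},
         "B": {"A": 2, "C": 1},
         "C": {"A": 1, "B": 2}}
print(total_tardiness(["A", "B", "C"], processing, due, setup))  # → 2
```

An exact algorithm would search over sequences (pruned by a dominance relation); this function is only the objective such a search would minimize.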
Procedia PDF Downloads 353
1028 A Model of Human Security: A Comparison of Vulnerabilities and Timespace
Authors: Anders Troedsson
Abstract:
For us humans, risks are intimately linked to human vulnerabilities: where there is vulnerability, there is potentially insecurity, and risk. Reducing vulnerability through compensatory measures means increasing security and decreasing risk. The paper suggests that a meaningful way to approach the study of risks (including threats, assaults, crises, etc.) is to understand the vulnerabilities these external phenomena evoke in humans. As is argued, the basis of risk evaluation, as well as of responses, is the more or less subjective perception by the individual person, or group of persons, exposed to the external event or phenomenon in question. This will be determined primarily by the vulnerability or vulnerabilities that the external factor is perceived to evoke. In this way, risk perception is primarily an inward dynamic, rather than an outward one. Therefore, a route towards an understanding of the perception of risks is a closer scrutiny of the vulnerabilities which they can evoke, thereby approaching an understanding of what the paper calls the essence of risk (including threat, assault, etc.), or that which a certain perceived risk means to an individual or group of individuals. As a necessary basis for gauging the wide spectrum of potential risks and their meaning, the paper proposes a model of human vulnerabilities, drawing from, inter alia, a long tradition of needs theory. In order to account for the subjectivity factor, which mediates between the innate vulnerabilities on the one hand and the event or phenomenon out there on the other, an ensuing ontological discussion about the timespace characteristics of risk/threat/assault as perceived by humans leads to the positing of two dimensions. These two dimensions are applied to the vulnerabilities, resulting in a modelling effort featuring four realms of vulnerabilities which are related to each other and together represent a dynamic whole.
In approaching the problem of risk perception, the paper thus defines the relevant realms of vulnerabilities, depicting them as a dynamic whole. With reference to a substantial body of literature and a growing international policy trend since the 1990s, this model is put in the language of human security - a concept relevant not only for international security studies and policy, but also for other academic disciplines and spheres of human endeavor.
Keywords: human security, timespace, vulnerabilities, risk perception
Procedia PDF Downloads 335
1027 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which relies on item selection and ability estimation using statistical methods: maximum information selection or selection from the posterior for item selection, and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators for ability estimation. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and then comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although performing poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step for an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set, and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features like test type, etc., could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and can be used to learn functions and express models which may not be trivial to express via equations. This kind of framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
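To make the ability-estimation step concrete: under the common two-parameter logistic (2PL) IRT model, the ML estimate of ability maximizes the likelihood of the observed response pattern. The sketch below uses a crude grid search in place of the study's PyMC/scikit-learn pipeline; the item parameters are illustrative, not the study's:

```python
# ML ability estimation under the 2PL IRT model via grid search.
# This is the computation a trained neural-network regressor would
# approximate in one forward pass.
import math

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items):
    """responses: 0/1 per item; items: (discrimination a, difficulty b)."""
    best_theta, best_ll = None, -math.inf
    for i in range(-400, 401):          # grid over theta in [-4, 4]
        theta = i / 100.0
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]  # hypothetical item bank
print(ml_ability([1, 1, 0], items))
```

Note the cost: this grid search reruns for every response update, which is exactly the per-request recomputation the abstract says a trained regressor would avoid.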
Procedia PDF Downloads 173
1026 In-silico Analysis of Plumbagin against Cancer Receptors
Authors: Arpita Roy, Navneeta Bharadvaja
Abstract:
Cancer is an uncontrolled growth of abnormal cells in the body. It is one of the most serious diseases, on which extensive research work is going on all over the world. Structure-based drug design is a computational approach which helps in the identification of potential leads that can be used for the development of a drug. Plumbagin is a naphthoquinone derivative from Plumbago zeylanica roots and belongs to one of the largest and most diverse groups of plant metabolites. Anticancer and antiproliferative activities of plumbagin have been observed in animal models as well as in cell cultures. Plumbagin shows inhibitory effects on multiple cancer-signaling proteins; however, the binding mode and the molecular interactions have not yet been elucidated for most of these protein targets. In this investigation, an attempt was made to provide structural insights into the binding mode of plumbagin against four cancer receptors using molecular docking. Plumbagin showed minimal energy against the targeted cancer receptors, which suggests its stability and potential against different cancers. The lowest binding energies of plumbagin with COX-2, TACE, and CDK6 are -5.39, -4.93, and -4.81 kcal/mol, respectively. Comparison studies of plumbagin with different receptors showed that it is a promising compound for cancer treatment. It was also found that plumbagin obeys Lipinski's Rule of 5, and computed ADMET properties showed drug-likeness and improved bioavailability. Since plumbagin is from a natural source, it has reduced side effects, and these results would be useful for cancer treatment.
Keywords: cancer, receptor, plumbagin, docking
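The Rule of 5 screen mentioned above is mechanical enough to sketch. The thresholds are Lipinski's standard ones; the descriptor values for plumbagin below are approximate (the molecular weight follows from its formula C11H8O3; the logP and hydrogen-bond counts are illustrative estimates, not computed values from the study):

```python
# Lipinski's Rule of 5 screen for oral drug-likeness.
# A compound passes if it violates at most one of the four rules
# (a common convention).

def lipinski_ok(mol_weight, logp, h_donors, h_acceptors):
    violations = sum([mol_weight > 500,   # molecular weight <= 500 Da
                      logp > 5,           # octanol-water logP <= 5
                      h_donors > 5,       # H-bond donors <= 5
                      h_acceptors > 10])  # H-bond acceptors <= 10
    return violations <= 1

# approximate descriptors for plumbagin (C11H8O3, MW ~188 Da)
print(lipinski_ok(mol_weight=188.18, logp=2.5, h_donors=1, h_acceptors=3))  # → True
```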
Procedia PDF Downloads 141
1025 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns
Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph
Abstract:
Herd monitoring and management, in particular the detection of 'attention animals' that require care, treatment or assistance, is crucial for the effective reproduction status, health, and overall well-being of dairy cows. In large farms, traditional methods based on direct observation or analysis of video recordings become labour-intensive and time-consuming. Thus, automatic monitoring systems using sensors have become increasingly important to continuously and accurately track the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be effectively used in health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of the transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes can be performed, which enables the deployment of reliable dairy cow monitoring systems.
Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation
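The lognormal (log-distance) path loss model referred to above has the standard form PL(d) = PL(d0) + 10 n log10(d/d0) + X_sigma, where n is the path loss exponent and X_sigma is zero-mean shadowing. A sketch of fitting n by least squares, using synthetic measurements rather than the barn data (the reference values d0 and pl0 below are assumptions):

```python
# Fit the path loss exponent n of a log-distance model to measured
# (distance, path loss) pairs: PL(d) = pl0 + 10*n*log10(d/d0).
import math

def fit_path_loss_exponent(distances, losses, d0=1.0, pl0=40.0):
    """Least-squares fit of n (closed form, no intercept to fit)."""
    xs = [10.0 * math.log10(d / d0) for d in distances]
    ys = [pl - pl0 for pl in losses]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(d, n, d0=1.0, pl0=40.0):
    return pl0 + 10.0 * n * math.log10(d / d0)

# synthetic measurements generated with n = 2.0 (free-space-like, no shadowing)
dists = [2.0, 5.0, 10.0, 20.0]
losses = [predict(d, 2.0) for d in dists]
print(round(fit_path_loss_exponent(dists, losses), 2))  # → 2.0
```

With real barn measurements the residuals of this fit give the shadowing standard deviation, and the fitted model feeds directly into the link-budget and energy planning the abstract mentions.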
Procedia PDF Downloads 317
1024 Atmospheric Oxidation of Carbonyls: Insight to Mechanism, Kinetic and Thermodynamic Parameters
Authors: Olumayede Emmanuel Gbenga, Adeniyi Azeez Adebayo
Abstract:
Carbonyls are first-generation products of the tropospheric degradation reactions of volatile organic compounds (VOCs). This computational study examined the mechanism of removal of carbonyls from the atmosphere via the hydroxyl radical. The kinetics of the reactions were computed from the activation energy, using the activation enthalpy (ΔH‡) and Gibbs free energy of activation (ΔG‡). The minimum energy path (MEP) analysis reveals that in all the molecules, the products are more stable in energy than the reactants, which implies that the forward reaction is thermodynamically favorable. Hydrogen abstraction from the aromatic aldehydes, especially those without methyl substituents, is more kinetically favorable than from the other aldehydes, in the order aromatic (without methyl or with meta methyl) > alkene (short chain) > diene > long-chain aldehydes. The activation energy is much lower for the forward reaction than for the backward one, indicating that the forward reactions are kinetically favored over their backward counterparts. In terms of thermodynamic stability, the aromatic compounds are found to be less favorable than the aliphatic ones. The study concludes that the chemistry of the carbonyl bond of the aldehyde changes significantly from the reactants to the products.
Keywords: atmospheric carbonyls, oxidation, mechanism, kinetic, thermodynamic
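Activation parameters like ΔG‡ connect to rate constants through transition state theory. A sketch of the standard Eyring equation, k = (kB T / h) exp(-ΔG‡ / RT); the barrier heights used below are illustrative, not the study's computed values:

```python
# Eyring (transition state theory) rate constant from the Gibbs free
# energy of activation.
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_kj_per_mol, temperature=298.15):
    """Unimolecular rate constant (1/s) at the given temperature."""
    dg = delta_g_kj_per_mol * 1000.0  # kJ/mol -> J/mol
    return (KB * temperature / H) * math.exp(-dg / (R * temperature))

# a lower barrier gives a faster reaction, matching the kinetic
# ordering of the aldehydes discussed above
print(eyring_rate(30.0) > eyring_rate(50.0))  # → True
```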
Procedia PDF Downloads 48
1023 Stress Hyperglycaemia and Glycaemic Control Post Cardiac Surgery: Relaxed Targets May Be Acceptable
Authors: Nicholas Bayfield, Liam Bibo, Charley Budgeon, Robert Larbalestier, Tom Briffa
Abstract:
Introduction: Stress hyperglycaemia is common following cardiac surgery. Its optimal management is uncertain and may differ by diabetic status. This study assesses the in-hospital glycaemic management of cardiac surgery patients and the associated postoperative outcomes. Methods: A retrospective cohort analysis of all patients undergoing cardiac surgery at Fiona Stanley Hospital from February 2015 to May 2019 was undertaken. Management and outcomes of hyperglycaemia following cardiac surgery were assessed. Follow-up was assessed to 1 year postoperatively. Multivariate regression modelling was utilised. Results: 1050 non-diabetic patients and 689 diabetic patients were included. In the non-diabetic cohort, patients with mild (peak blood sugar level [BSL] < 14.3 mmol/L), transient stress hyperglycaemia managed without insulin were not at an increased risk of wound-related morbidity (P=0.899) or mortality at 1 year (P=0.483). Insulin management was associated with wound-related readmission to hospital (P=0.004) and superficial sternal wound infection (P=0.047). Prolonged or severe stress hyperglycaemia was predictive of hospital re-admission (P=0.050) but not of morbidity or mortality (P=0.546). Diabetes mellitus was an independent risk factor for 1-year mortality (OR 1.972 [1.041-3.736], P=0.037), graft harvest site wound infection (OR 1.810 [1.134-2.889], P=0.013) and wound-related readmission (OR 1.866 [1.076-3.236], P=0.026). In diabetics, postoperative peak BSL > 13.9 mmol/L was predictive of graft harvest site infections (OR 3.528 [1.724-7.217], P=0.001) and wound-related readmission (OR 3.462 [1.540-7.783], P=0.003), regardless of the modality of management. A peak BSL of 10.0-13.9 mmol/L did not increase the risk of morbidity/mortality compared to a peak BSL of < 10.0 mmol/L (P=0.557). Diabetics with a peak BSL of 13.9 mmol/L or less did not have significantly worse morbidity/mortality outcomes compared to non-diabetics (P=0.418).
Conclusion: In non-diabetic patients, transient mild stress hyperglycaemia following cardiac surgery does not uniformly require treatment. In diabetic patients, postoperative hyperglycaemia with a peak BSL exceeding 13.9 mmol/L was associated with wound-related morbidity and hospital readmission following cardiac surgery.
Keywords: cardiac surgery, pulmonary embolism, pulmonary embolectomy, cardiopulmonary bypass
Procedia PDF Downloads 160
1022 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concerns about volatility transmission among commodities. The problem is further exacerbated by volatility induced by other commodity price fluctuations, making decisions on hedging strategy both costly and less effective. This paper therefore analyzes the volatility spillover effects among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching regime approach to analyzing the issue of volatility spillovers in different economic conditions, namely upturn and downturn economies. In particular, we investigate relationships and volatility transmission between these commodities in different economic conditions. We propose a copula-based multivariate Markov switching GARCH model with two regimes that depend on economic conditions, and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families: two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, weekly data from 4 January 2005 to 1 September 2016 are used, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions.
In addition, the results on hedge effectiveness also suggest the optimal cross-hedge strategies in different economic conditions, especially upturn and downturn economies.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
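The cross-hedge ratio that the correlation term feeds into is, in its classic minimum-variance form, h* = rho * sigma_spot / sigma_futures. A sketch with toy numbers; in the paper's setting rho and the volatilities would come from the regime-dependent copula-GARCH fit, not from the constants below:

```python
# Minimum-variance cross-hedge ratio: units of the futures contract to
# short per unit of spot exposure.

def cross_hedge_ratio(rho, sigma_spot, sigma_futures):
    """rho: correlation between spot and futures returns;
    sigma_*: return volatilities (same frequency for both)."""
    return rho * sigma_spot / sigma_futures

# e.g. hedging one cereal with a related futures contract under an
# assumed upturn-regime correlation of 0.6
print(round(cross_hedge_ratio(rho=0.6, sigma_spot=0.04, sigma_futures=0.05), 2))  # → 0.48
```

Because rho and the volatilities switch with the regime, the optimal hedge ratio differs between upturn and downturn conditions, which is the point the abstract makes.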
Procedia PDF Downloads 200
1021 Effect of Piston and its Weight on the Performance of a Gun Tunnel via Computational Fluid Dynamics
Authors: A. A. Ahmadi, A. R. Pishevar, M. Nili
Abstract:
The test gas in a gun tunnel is non-isentropically compressed and heated by a lightweight piston, so the first consideration is the optimum piston weight. Although various aspects of the influence of piston weight on gun tunnel performance have been studied, it is not possible to decide from the existing literature what piston weight is required for optimum performance in various conditions. The technique whereby the piston is rapidly brought to rest at the end of the gun tunnel barrel, with the resulting peak pressure equal in magnitude to the final equilibrium pressure, is called the equilibrium piston technique. The equilibrium piston technique was developed to estimate the equilibrium piston mass, but it cannot give an appropriate estimate of the optimum piston weight. In the present work, a gun tunnel with a diameter of 3 in. is described and its performance is investigated numerically to obtain the effect of the piston and its weight. The numerical results in the present work are in very good agreement with experimental results. The significant influence of the piston is shown by comparing the gun tunnel results with those of a conventional shock tunnel of the same dimensions and with the same initial conditions. In the gun tunnel, an increase of around 250% in running time is gained relative to the shock tunnel. The numerical results also show that the equilibrium piston technique is not a good way to estimate a suitable piston weight, and that there is a lighter piston which can increase the running time of the gun tunnel by around 60%.
Keywords: gun tunnel, hypersonic flow, piston, shock tunnel
Procedia PDF Downloads 371
1020 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam
Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen
Abstract:
The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change due to flooding and sea-level rise, and therefore faces an increased burden of climate change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. Climate is believed to be an important factor for dengue transmission. This study aims to enhance the capacity for dengue prediction through the relationship of dengue incidence with climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models for vector-host infectious disease, including larvae, mosquitoes, and humans, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation), and land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well for the whole studied region. However, the highest outbreak, in 2007, was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects will be discussed to address the limitations of the model.
This suggests the need to consider both climate variables and other variability across temporal and spatial scales.
Keywords: infectious disease, dengue, geospatial data, climate
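The seasonal reproduction number for a vector-borne disease is commonly written in the Ross-Macdonald form, where climate enters through mosquito density, biting rate, mortality, and the extrinsic incubation period. A sketch of that classic expression; this is not necessarily the study's exact formulation, and all parameter values are illustrative rather than calibrated to the Mekong Delta data:

```python
# Ross-Macdonald-style reproduction number for a vector-borne disease.
import math

def vector_r0(m, a, b, c, mu_v, gamma_h, tau):
    """
    m: mosquitoes per human; a: bites per mosquito per day;
    b, c: transmission probabilities (vector->human, human->vector);
    mu_v: mosquito death rate (1/day); gamma_h: human recovery rate (1/day);
    tau: extrinsic incubation period (days).
    exp(-mu_v * tau) is the chance a mosquito survives incubation.
    """
    return (m * a * a * b * c * math.exp(-mu_v * tau)) / (mu_v * gamma_h)

# warmer, wetter months raise m and shorten tau, pushing R0 above 1
print(vector_r0(m=2.0, a=0.3, b=0.5, c=0.5, mu_v=0.1, gamma_h=0.14, tau=10) > 1.0)  # → True
```

Evaluating this month by month with climate-driven parameters is what yields a seasonal reproduction number whose peaks track the rainy season.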
Procedia PDF Downloads 381
1019 Numerical Modelling of Prestressed Geogrid Reinforced Soil System
Authors: Soukat Kumar Das
Abstract:
Rapid industrialization and population growth have resulted in a scarcity of suitable ground conditions. This has driven the need for ground improvement by means of reinforcement with geosynthetics, with the minimum possible settlement and the maximum possible safety. Prestressing the geosynthetics offers an economical yet safe method of reaching this goal. The commercially available software PLAXIS 3D has made the analysis of prestressed geosynthetics simpler, with realistic simulations of the ground. In this study, attempts have been made to analyse the effect of prestressing geosynthetics, and of footing interference, on unreinforced (UR), geogrid reinforced (GR) and prestressed geogrid reinforced (PGR) soil, in terms of load bearing capacity and settlement characteristics, using numerical analysis in PLAXIS 3D. The results of the numerical analysis have been validated against and compared with those given in the referred paper, and were found to be in very good agreement with the actual field values, with very small variation. The GR soil was found to improve the bearing pressure by 240%, whereas the PGR soil improves it by almost 500% for 1 mm settlement; in fact, the PGR soil enhanced the bearing pressure of the GR soil by almost 200%. The settlement reduction was also found to be very significant: at a bearing pressure of 100 kPa, the settlement reduction of the PGR soil was about 88% with respect to the UR soil, and 67% with respect to the GR soil. The prestressing force resulted in an enhanced reinforcement mechanism and thus an increased bearing pressure. The deformation at the geogrid layer was found to be 13.62 mm for GR soil, whereas it decreased to a mere 3.5 mm for PGR soil, which confirms the effect of prestressing on the geogrid layer.
The improvement factor, conventionally known as the Bearing Capacity Ratio (BCR), depicts the improvement of the PGR soil with respect to the UR and GR soil, and of the GR soil with respect to the UR soil, at different settlements. In the present analysis it was found to vary in the range of 1.66-2.40 for GR soil and between 3.58 and 5.12 for PGR soil, both with respect to UR soil. The effect of prestressing was also observed in the case of two interfering square footings. The centre-to-centre distance between the two footings (SFD) was taken to be B, 1.5B, 2B, 2.5B and 3B, where B is the width of the footing. It was found that for UR soil the improvement of the bearing pressure persisted up to a spacing of 1.5B, after which it remained almost the same. For GR soil the zone of influence rose to 2B, and for PGR soil it went up further, to 2.5B. The zone of interference for PGR soil thus increased by 67% relative to UR soil and by almost 25% relative to GR soil.
Keywords: bearing, geogrid, prestressed, reinforced
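The Bearing Capacity Ratio discussed above is simply the ratio of bearing pressures at equal settlement. A sketch with illustrative pressures chosen to land inside the reported BCR ranges (the kPa values below are hypothetical, not the study's curves):

```python
# Bearing Capacity Ratio (improvement factor) at a fixed settlement:
# BCR = bearing pressure of reinforced soil / bearing pressure of
# unreinforced soil.

def bearing_capacity_ratio(q_reinforced, q_unreinforced):
    return q_reinforced / q_unreinforced

# illustrative pressures (kPa) at the same settlement
q_ur, q_gr, q_pgr = 100.0, 240.0, 500.0
print(bearing_capacity_ratio(q_gr, q_ur))   # → 2.4  (within the 1.66-2.40 GR range)
print(bearing_capacity_ratio(q_pgr, q_ur))  # → 5.0  (within the 3.58-5.12 PGR range)
```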
Procedia PDF Downloads 401