Search results for: iterative calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1594

694 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry

Authors: Anu S. Nair, V. Anantha Subramanian

Abstract:

This paper presents a surface development and fairing technique that combines a modern computer-aided design tool, the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs, such as the Wageningen-Lap series, give only a sectional area distribution; others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. The method takes a sectional area distribution as input and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body and run regions. The method defines a minimum number of control points at each section and, using the golden-section search method or the bisection method, converges each section shape to the one with the prescribed sectional area with a minimized error in the area fit. The section shapes are then combined into the evolving faired NURBS surface, typically within 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. A curvature criterion check on the hull lines shows the evolution of the smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance, as is done in modern design practice. The method can handle complex shapes such as the bulbous bow. Surface patches fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and an example illustrates the method. The most important advantage is speed: the rapid, iterative fairing of the hull form.
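The area-matching step described above can be sketched with a bisection on a single shape parameter. The one-parameter section family below is a hypothetical stand-in for the paper's NURBS control-point scheme, and the sketch is in Python rather than the paper's MATLAB:

```python
import numpy as np

def section_area(beam, draft, fullness):
    """Sectional area of a hypothetical one-parameter section shape
    y(z) = (beam/2) * (z/draft)**fullness, by the trapezoid rule."""
    z = np.linspace(0.0, draft, 101)
    y = (beam / 2.0) * (z / draft) ** fullness
    return 2.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))

def fit_fullness(target_area, beam, draft, lo=0.05, hi=5.0, tol=1e-8):
    """Bisection on the fullness exponent until the computed area matches
    the prescribed sectional area (the area shrinks as fullness grows)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if section_area(beam, draft, mid) > target_area:
            lo = mid   # area still too large: push toward a finer section
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same loop works with golden-section search on the squared area error; bisection suffices here because the area is monotone in the shape parameter.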

Keywords: computer-aided design, methodical series, NURBS, ship design

Procedia PDF Downloads 164
693 Response of Pavement under Temperature and Vehicle Coupled Loading

Authors: Yang Zhong, Mei-Jie Xu

Abstract:

To study the dynamic mechanical response of asphalt pavement under combined temperature and vehicle loading, the asphalt pavement was regarded as a multilayered elastic half-space system, and a theoretical analysis was conducted with the dynamic modulus of the asphalt mixture as the parameter. First, based on dynamic modulus tests of the asphalt mixture, the functional relationship between the dynamic modulus of a representative asphalt mixture and temperature was obtained. Next, the analytical solution for the thermal stress in a single layer was derived from the thermal equations of equilibrium by applying the Laplace and Hankel integral transformations. The analytical solution of the thermal stress model for asphalt pavement was then derived using the transfer matrix of thermal stress in a multilayer elastic system. Finally, the variation of thermal stress in the pavement structure was analyzed. The results show an obvious difference between the thermal stress based on the dynamic modulus and the solution based on the static modulus. Therefore, the dynamic change of this parameter in the asphalt mixture should be taken into consideration when the theoretical analysis is carried out.

Keywords: asphalt pavement, dynamic modulus, integral transformation, transfer matrix, thermal stress

Procedia PDF Downloads 495
692 Influence of Vegetable Oil-Based Controlled Cutting Fluid Impinging Supply System on Micro Hardness in Machining of Ti-6Al-4V

Authors: Salah Gariani, Islam Shyha, Fawad Inam, Dehong Huo

Abstract:

A controlled cutting fluid impinging supply system (CUT-LIST) was developed to deliver an accurate amount of cutting fluid into the machining zone via well-positioned coherent nozzles, based on a calculation of the heat generated. The performance of CUT-LIST was evaluated against a conventional flood cutting fluid supply system during step shoulder milling of Ti-6Al-4V using a vegetable oil-based cutting fluid. In this paper, the micro-hardness of the machined surface is used as the main criterion to compare the two systems. CUT-LIST provided significant reductions in cutting fluid consumption (up to 42%). Both systems caused increased micro-hardness values at 100 µm from the machined surface, whereas a slight reduction in micro-hardness of 4.5% was measured when using CUT-LIST. It was noted that the first 50 µm is a soft sub-surface promoted by thermal softening, whereas down to 100 µm there is a hard sub-surface caused by cyclic internal work hardening; the hardness then gradually decreases until it reaches the nominal hardness of the base material. It can be concluded that CUT-LIST consistently gave lower micro-hardness values near the machined surface in all conditions investigated.

Keywords: impinging supply system, micro-hardness, shoulder milling, Ti-6Al-4V, vegetable oil-based cutting fluid

Procedia PDF Downloads 280
691 Numerical Analysis of the Coanda Effect on the Classical Interior Ejectors

Authors: Alexandru Dumitrache, Florin Frunzulica, Octavian Preotu

Abstract:

The problem of mitigating flow detachment near solid surfaces to improve global aerodynamic performance by exploiting the Coanda effect has been addressed extensively in the literature since 1940, and the research continues with modern means of calculation and new experimental methods. This paper examines in detail the behavior of a classical interior ejector assisted by the Coanda effect, as used in propulsion systems. For the numerical investigations, an implicit formulation of the RANS equations for axisymmetric flow with a shear stress transport k-ω (SST) turbulence model is used. The numerical results emphasize the efficiency of the ejector, depending on the physical parameters of the flow and the geometric configuration. Furthermore, numerical investigations are carried out regarding the evolution of the Reynolds number when the jet is attached to the wall, considering three geometric configurations: sudden expansion, open cavity, and sudden expansion with a divergent section at the inlet. This provides further insight into complexities such as the variety of flow structures and the related bifurcations and flow instabilities. Thus, the conditions and the limits within which one can benefit from the advantages of Coanda-type flows are determined.

Keywords: Coanda effect, Coanda ejector, CFD, stationary bifurcation, sudden expansion

Procedia PDF Downloads 208
690 Mecano-Reliability Coupled of Reinforced Concrete Structure and Vulnerability Analysis: Case Study

Authors: Kernou Nassim

Abstract:

The current study presents a coupled vulnerability and mechanical-reliability approach that evaluates the seismic performance of reinforced concrete structures to determine the probability of failure. The performance function reflecting the non-linear behavior of the structure is modeled by a response surface, establishing an analytical relationship between the random variables (concrete strength and steel yield strength) and the mechanical responses of the structure (inter-storey displacement) obtained from the pushover results of finite element simulations. The pushover analysis is executed with the software SAP2000. The results prove that properly designed frames perform well under seismic loads. The work is a comparative study of the behavior of an existing structure before and after strengthening, using the pushover method. The indirect coupling of mechanics and reliability via the response surface avoids prohibitive calculation times. Finally, the results of the proposed approach are compared with Monte Carlo simulation. The comparison shows that the structure is more reliable after the introduction of new shear walls.

Keywords: finite element method, surface response, reliability, reliability mechanical coupling, vulnerability

Procedia PDF Downloads 115
689 Application of Universal Distribution Factors for Real-Time Complex Power Flow Calculation

Authors: Abdullah M. Alodhaiani, Yasir A. Alturki, Mohamed A. Elkady

Abstract:

Complex power flow distribution factors, which relate line complex power flows to the bus injected complex powers, have been widely used in various power system planning and analysis studies. In particular, AC distribution factors have been used extensively in recent power and energy pricing studies in the free electricity market field. As demonstrated in the existing literature, many electricity market costing studies rely on the use of distribution factors. The known distribution factors, whether injection shift factors (ISFs) or power transfer distribution factors (PTDFs), are linear approximations of the first-order sensitivities of the active power flows with respect to various variables. This paper presents a novel model for evaluating universal distribution factors (UDFs), which are appropriate for an extensive range of power system analysis and free electricity market studies. These distribution factors are used to calculate line complex power flows, are independent of bus power injections, and take the form of compact matrix expressions with total flexibility in determining the position on the line at which flows are measured. The proposed approach was tested on the IEEE 9-bus system. Numerical results demonstrate that the proposed approach is very accurate compared with the exact method.
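For context, the classical injection shift factors that the paper generalizes can be computed from a DC power flow model. The sketch below uses a hypothetical 3-bus network with made-up reactances; the paper's universal distribution factors extend this idea to complex power flows and arbitrary measurement points along the line:

```python
import numpy as np

# Hypothetical 3-bus network: (from_bus, to_bus, reactance in p.u.)
lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]
n_bus, slack = 3, 0

# Branch susceptance matrix Bf and bus susceptance matrix Bbus
Bf = np.zeros((len(lines), n_bus))
Bbus = np.zeros((n_bus, n_bus))
for k, (i, j, x) in enumerate(lines):
    b = 1.0 / x
    Bf[k, i], Bf[k, j] = b, -b
    Bbus[[i, j], [i, j]] += b
    Bbus[i, j] -= b
    Bbus[j, i] -= b

# ISF: sensitivity of each line flow to an injection at each non-slack bus
keep = [b for b in range(n_bus) if b != slack]
isf = np.zeros((len(lines), n_bus))
isf[:, keep] = Bf[:, keep] @ np.linalg.inv(Bbus[np.ix_(keep, keep)])

# Line flows for a given injection vector (the slack bus balances the rest)
p = np.array([0.0, 1.0, -1.0])   # 1 p.u. transferred from bus 1 to bus 2
flows = isf @ p
```

The ISF matrix is a pure network property, so repeated dispatch studies reuse it without re-solving the power flow.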

Keywords: distribution factors, power system, sensitivity factors, electricity market

Procedia PDF Downloads 467
688 A Lifetime-Enhancing Monitoring Node Distribution Using Minimum Spanning Tree in Mobile Ad Hoc Networks

Authors: Sungchul Ha, Hyunwoo Kim

Abstract:

In mobile ad hoc networks, all nodes have only limited resources and computing capability, so a communication topology with a long lifetime benefits every node in the network. There is a variety of research on security problems in wireless ad hoc networks, and many existing studies try to design efficient security schemes that reduce network power consumption and enhance network lifetime. Because a new node can join the network at any time, wireless ad hoc networks are exposed to various threats and can be destroyed by attacks. Resource consumption is absolutely necessary to secure networks, but excessive resource consumption can be critical to network lifetime. This paper focuses on an efficient monitoring node distribution that enhances network lifetime in wireless ad hoc networks. Since wireless ad hoc networks cannot use the centralized infrastructure and security systems of wired networks, a new, special IDS scheme is necessary. The scheme should not only cover all nodes in the network but also enhance the network lifetime. In this paper, we propose an efficient IDS node distribution scheme using the minimum spanning tree (MST) method. The simulation results show that the proposed algorithm has superior performance in comparison with existing algorithms.
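The minimum spanning tree at the core of the proposed distribution scheme can be sketched with Kruskal's algorithm. The distance-based edge weight below is an assumption standing in for whatever link cost the scheme actually uses:

```python
import itertools
import math

def kruskal_mst(nodes, dist):
    """Minimum spanning tree by Kruskal's algorithm with union-find;
    dist(u, v) is the edge weight (here a stand-in for the link cost
    between two monitoring nodes)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    mst = []
    for u, v in sorted(itertools.combinations(nodes, 2),
                       key=lambda e: dist(*e)):
        ru, rv = find(u), find(v)
        if ru != rv:                        # joins two components: keep it
            parent[ru] = rv
            mst.append((u, v))
    return mst

# Toy deployment: four nodes on a unit square
pos = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
tree = kruskal_mst(list(pos), lambda u, v: math.dist(pos[u], pos[v]))
```

A spanning tree over n nodes always has n − 1 edges, so the toy deployment yields three unit-length links.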

Keywords: MANETs, IDS, power control, minimum spanning tree

Procedia PDF Downloads 364
687 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
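A minimal sketch of the least-squares catenary fit described above, assuming a single 2D span whose vertex lies near a sampled point; the real pipeline fits full 3D catenaries to noisy LiDAR clusters between pole pairs:

```python
import numpy as np

def catenary(x, x0, y0, c):
    """Catenary with vertex at (x0, y0): y = y0 + c*(cosh((x-x0)/c) - 1)."""
    return y0 + c * (np.cosh((x - x0) / c) - 1.0)

def fit_catenary(x, y, c_grid):
    """Coarse least-squares fit: fix the vertex abscissa at the sampled
    minimum, solve the linear offset y0 in closed form for each candidate
    catenary parameter c, and keep the candidate with the smallest RSS."""
    x0 = x[np.argmin(y)]
    best = None
    for c in c_grid:
        shape = c * (np.cosh((x - x0) / c) - 1.0)
        y0 = float(np.mean(y - shape))      # linear parameter, closed form
        rss = float(np.sum((y0 + shape - y) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(x0), y0, float(c))
    return best[1:]   # (x0, y0, c)
```

In practice the grid search over c would be replaced by a damped nonlinear least-squares solver, and outlier points would be down-weighted before fitting.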

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 93
686 Screening Post-Menopausal Women for Osteoporosis by Complex Impedance Measurements of the Dominant Arm

Authors: Yekta Ülgen, Fırat Matur

Abstract:

The Cole-Cole parameters of 40 post-menopausal women are compared with their DEXA bone mineral density measurements. The impedance characteristics of the four extremities are compared: the left and right extremities are statistically the same, but the lower extremities are statistically different from the upper ones due to their different fat content. The correlation of the Cole-Cole impedance parameters with bone mineral density (BMD) is observed to be higher for the dominant arm. In the post-menopausal population, ANOVA tests of the dominant-arm characteristic frequency, as a predictor for the DEXA-classified osteopenic and osteoporotic population around the lumbar spine, are statistically very significant. When used for total lumbar spine osteoporosis diagnosis, the area under the receiver operating curve of the characteristic frequency is 0.875, suggesting that the Cole-Cole plot characteristic frequency could be a useful diagnostic parameter when integrated into standard screening methods for osteoporosis. Moreover, the characteristic frequency can be measured directly by monitoring the frequency-driven angular behavior of the dominant arm, without performing any complex calculation.

Keywords: bioimpedance spectroscopy, bone mineral density, osteoporosis, characteristic frequency, receiver operating curve

Procedia PDF Downloads 518
685 Use of Fuzzy Logic in the Corporate Reputation Assessment: Stock Market Investors’ Perspective

Authors: Tomasz L. Nawrocki, Danuta Szwajca

Abstract:

The growing importance of reputation in building enterprise value and achieving long-term competitive advantage creates the need for its measurement and evaluation for management purposes (effective reputation and reputation-risk management). The paper presents a practical application of a self-developed corporate reputation assessment model from the viewpoint of stock market investors. The model has a pioneering character, and the example analysis performed for a selected industry serves as a specific test of this tool. In the proposed solution, three aspects were considered: informational, financial and development, and social. It was also assumed that the individual sub-criteria would be based on public sources of information, and that fuzzy logic, capable of producing a synthetic final assessment, would be used as the calculation apparatus. The main reason for developing this model was to fill the gap in synthetic measures of corporate reputation that would provide a higher degree of objectivity by relying on "hard" (non-survey) and publicly available data. It should also be noted that results obtained with the proposed corporate reputation assessment method allow various internal as well as inter-branch comparisons and analysis of the impact of corporate reputation.

Keywords: corporate reputation, fuzzy logic, fuzzy model, stock market investors

Procedia PDF Downloads 241
684 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport

Authors: Marcel Schneider, Nils Nießen

Abstract:

Microscopic simulation programs enable the description of the two processes of railway operation and the preceding timetabling. Occupation conflicts are often solved on both process levels based on defined train priorities. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations; it is either compared to an acceptable level of service or the delays are evaluated economically by linear monetary functions. Without a well-founded objective function, however, it is impossible to evaluate dispatching decisions properly. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end customers. These models evaluate the knock-on delays in more detail than linear monetary functions and consider other competing modes of transport. The new approach couples a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers change over to other modes of transport. The new approach first considers rail and road transport, but it can also be extended to air transport. The split among end customers is described by the modal split. The reactions of the end customers affect the revenues of the railway undertakings. Different travel purposes have different payment reserves and tolerances towards delays. Longer journey times cause, besides revenue changes, additional costs. The costs are either time-dependent or track-dependent and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays.
The contribution margin is calculated for different resolution decisions of the same conflict. The conflict resolution is improved until the monetary loss is minimised. The iterative process thus determines an optimal conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value can be determined for each dispatching decision.

Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations

Procedia PDF Downloads 322
683 Horizontal and Vertical Illuminance Correlations in a Case Study for Shaded South Facing Surfaces

Authors: S. Matour, M. Mahdavinejad, R. Fayaz

Abstract:

Daylight utilization is a key factor in achieving visual and thermal comfort and energy savings in integrated building design. However, the lack of measured data on this topic has become a major challenge, given the increasing need to integrate lighting concepts and simulations in the early stages of design. The current paper deals with the values of daylight illuminance on horizontal and south-facing vertical surfaces; the data are estimated using the IESNA model and measured values of horizontal and vertical illuminance, and a regression model with an acceptable linear correlation is obtained. The resulting illuminance frequency curves are useful for estimating daylight availability on south-facing surfaces in Tehran. In addition, the relationship between indirect vertical illuminance and the corresponding global horizontal illuminance is analyzed. A simple parametric equation is proposed to predict the vertical illuminance on a shaded south-facing surface. The equation correlates the ratio between the vertical and horizontal illuminance with the solar altitude and is used together with another relationship to predict the vertical illuminance. Both equations show good agreement, which allows the calculation of indirect vertical illuminance on a south-facing surface at any time throughout the year.
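The parametric relationship described above, the ratio of shaded vertical to global horizontal illuminance as a linear function of solar altitude, can be sketched as follows. The altitude/ratio pairs are purely illustrative, not the measured Tehran data:

```python
import numpy as np

# Illustrative (not measured) pairs: solar altitude in degrees vs. the
# ratio of shaded vertical to global horizontal illuminance
altitude = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
ratio = np.array([0.62, 0.55, 0.49, 0.44, 0.38, 0.33, 0.28])

# Linear correlation ratio = a*altitude + b, as in the parametric equation
a, b = np.polyfit(altitude, ratio, 1)

def vertical_illuminance(global_horizontal_lux, solar_altitude_deg):
    """Predict indirect vertical illuminance from the fitted ratio."""
    return (a * solar_altitude_deg + b) * global_horizontal_lux
```

With data of this shape the slope is negative: the higher the sun, the smaller the share of horizontal illuminance that reaches a shaded vertical surface.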

Keywords: Tehran daylight availability, horizontal illuminance, vertical illuminance, diffuse illuminance

Procedia PDF Downloads 197
682 Inter Laboratory Comparison with Coordinate Measuring Machine and Uncertainty Analysis

Authors: Tugrul Torun, Ihsan A. Yuksel, Sinem On Aktan, Taha K. Veziroglu

Abstract:

In quality control processes in some industries, the usage of CMMs has increased in recent years; consequently, CMMs play an important role in the acceptance or rejection of manufactured parts, and it is important to be able to make decisions based on fast measurements. Measurement uncertainty should also be considered when assessing parts against the related technical drawing and its tolerances. Since uncertainty calculation is difficult and time-consuming, most companies ignore the uncertainty value in their routine inspection methods. Although studies on measurement uncertainty of CMMs have been carried out in recent years, there is still no generally applicable method for analyzing task-specific measurement uncertainty. The ISO 15530 standard series addresses the calculation of measurement uncertainty, but it is not practical for routine industrial measurement. In this study, an inter-laboratory comparison test was carried out at ROKETSAN A.Ş. with all dimensional inspection units. The reference part used is traceable to the national metrology institute, TUBİTAK UME. Each unit measured the reference part according to the related technical drawings, and the task-specific measurement uncertainty was calculated with the related parameters. From the measurement results and uncertainty values, the En values were calculated.
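The En number used for the final assessment follows the standard interlaboratory comparison formula. The sketch below uses hypothetical CMM length results, not the study's data:

```python
import math

def en_value(x_lab, u_lab, x_ref, u_ref):
    """En number for an interlaboratory comparison:
    En = (x_lab - x_ref) / sqrt(U_lab**2 + U_ref**2),
    with U the expanded (k=2) uncertainties; |En| <= 1 is satisfactory."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# Hypothetical CMM length results (mm) against a traceable reference value
x_ref, u_ref = 50.0010, 0.0025
results = [("unit A", 50.0023, 0.0030), ("unit B", 50.0081, 0.0040)]
verdicts = {name: "satisfactory"
            if abs(en_value(x, u, x_ref, u_ref)) <= 1.0
            else "unsatisfactory"
            for name, x, u in results}
```

An |En| above 1 means the deviation from the reference exceeds what the combined expanded uncertainties can explain, so that unit's result is flagged.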

Keywords: coordinate measurement, CMM, comparison, uncertainty

Procedia PDF Downloads 204
681 Further Study of Mechanism of Contrasting Charge Transport Properties for Phenyl and Thienyl Substituent Organic Semiconductors

Authors: Yanan Zhu

Abstract:

Based on our previous work on the mechanism behind the mobility difference between phenyl- and thienyl-substituted semiconductors, we have explored further how to design high-performance organic thin-film transistors. The substituent group plays a significant role in material properties and device performance. In theoretical work, simulation of material properties and crystal packing can provide scientific guidance for materials synthesis in experiments. Here, we have used computational methods to design a new material with furan substituents, which has the potential to be used in organic thin-film transistors and organic single-crystal transistors. The calculated reorganization energy is much lower than that of 2,6-diphenylanthracene (DPAnt), which exhibits a mobility of more than 30 cm² V⁻¹ s⁻¹. Moreover, the other important parameter, the charge transfer integral, is larger than that of DPAnt, which suggests that the furan-substituted material may show even better charge transport. On the whole, the mechanism investigation of the phenyl and thienyl cases assisted in designing a novel material with furan substituents, which is predicted to yield outperforming organic field-effect transistors.
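The two descriptors compared above, reorganization energy and charge transfer integral, enter the semiclassical Marcus hopping rate, a standard (though not the only) way to connect them to mobility trends. A minimal sketch with illustrative parameter values in eV; the formula is the textbook Marcus expression, not a result from this paper:

```python
import math

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(transfer_integral_ev, reorg_energy_ev, temp_k=300.0):
    """Semiclassical Marcus hopping rate (1/s): a larger transfer integral
    and a smaller reorganization energy both increase the hopping rate,
    and hence the expected mobility."""
    lam, t = reorg_energy_ev, transfer_integral_ev
    prefactor = (2.0 * math.pi / HBAR) * t ** 2
    return (prefactor / math.sqrt(4.0 * math.pi * lam * KB * temp_k)
            * math.exp(-lam / (4.0 * KB * temp_k)))
```

This is why the combination reported above, lower reorganization energy plus larger transfer integral than DPAnt, points toward better charge transport.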

Keywords: theoretical calculation, mechanism, mobility, organic transistors

Procedia PDF Downloads 132
680 Diagnostic Clinical Skills in Cardiology: Improving Learning and Performance with Hybrid Simulation, Scripted Histories, Wearable Technology, and Quantitative Grading – The Assimilate Excellence Study

Authors: Daly M. J, Condron C, Mulhall C, Eppich W, O'Neill J.

Abstract:

Introduction: In contemporary clinical cardiology, comprehensive and holistic bedside evaluation including accurate cardiac auscultation is in decline despite having positive effects on patients and their outcomes. Methods: Scripted histories and scoring checklists for three clinical scenarios in cardiology were co-created and refined through iterative consensus by a panel of clinical experts; these were then paired with recordings of auscultatory findings from three actual patients with known valvular heart disease. A wearable vest with embedded pressure-sensitive panel speakers was developed to transmit these recordings when examined at the standard auscultation points. RCSI medical students volunteered for a series of three formative long case examinations in cardiology (LC1 – LC3) using this hybrid simulation. Participants were randomised into two groups: Group 1 received individual teaching from an expert trainer between LC1 and LC2; Group 2 received the same intervention between LC2 and LC3. Each participant’s long case examination performance was recorded and blindly scored by two peer participants and two RCSI examiners. Results: Sixty-eight participants were included in the study (age 27.6 ± 0.1 years; 74% female) and randomised into two groups; there were no significant differences in baseline characteristics between groups. Overall, the median total faculty examiner score was 39.8% (35.8 – 44.6%) in LC1 and increased to 63.3% (56.9 – 66.4%) in LC3, with those in Group 1 showing a greater improvement in LC2 total score than that observed in Group 2 (p < .001). Using the novel checklist, intraclass correlation coefficients (ICC) were excellent between examiners in all cases: ICC .994 – .997 (p < .001); correlation between peers and examiners improved in LC2 following peer grading of LC1 performances: ICC .857 – .867 (p < .001). 
Conclusion: Hybrid simulation and quantitative grading improve learning, standardisation of assessment, and direct comparisons of both performance and acumen in clinical cardiology.

Keywords: cardiology, clinical skills, long case examination, hybrid simulation, checklist

Procedia PDF Downloads 104
679 Study of Aerosol Deposition and Shielding Effects on Fluorescent Imaging Quantitative Evaluation in Protective Equipment Validation

Authors: Shinhao Yang, Hsiao-Chien Huang, Chin-Hsiang Luo

Abstract:

Leakage of protective clothing is an important issue in the occupational health field, yet there is no quantitative method for measuring the leakage of personal protective equipment. This work aims to measure the leakage of personal protective equipment quantitatively by using a fluorochrome aerosol tracer. The fluorescent aerosols were employed as airborne particulates in a controlled chamber with ultraviolet (UV) light-detectable stickers. After an exposure-and-leakage test, the protective equipment was removed and photographed under UV scanning to evaluate the areas, the color depth ratio, and the aerosol deposition and shielding effects of the areas where fluorescent aerosols had adhered to the body through the protective equipment. On this basis, this work built calculation software for the quantitative leakage ratio of protective clothing, based on the fluorescent illumination depth/aerosol concentration ratio, the illumination/Fa ratio, the aerosol deposition and shielding effects, and the leakage area ratio on the segmentation. The results indicated that the two-repetition total leakage rates of the X, Y, and Z type protective clothing for subject T were about 3.05, 4.21, and 3.52 mg/m², respectively; for five repetitions, the leakage rates were about 4.12, 4.52, and 5.11 mg/m².

Keywords: fluorochrome, deposition, shielding effects, digital image processing, leakage ratio, personal protective equipment

Procedia PDF Downloads 314
678 Fundamental Problems in the Operation of the Automotive Parts Industry Small and Medium Businesses in the Greater Bangkok and Perimeter

Authors: Thepnarintra Praphanphat

Abstract:

The purposes of this study were: 1) to investigate the operating conditions of the SME automotive parts industry in Bangkok and its vicinity, and 2) to compare the levels of operating problems of the SME automotive parts industry in Bangkok and its vicinity according to enterprise size. The sample comprised 196 entrepreneurs in the SME automotive parts industry in Bangkok and its vicinity, derived by simple random sampling with the sample size calculated from R. V. Krejcie and D. W. Morgan's tables. The research statistics included frequency, percentage, mean, standard deviation, and t-test. The results revealed that, in general, the problem levels of the SME automotive parts industry in Bangkok and its vicinity were high. Considered in detail, the problem levels were high in every aspect: personnel, production, export, finance, and marketing, respectively. The comparison of problem levels according to enterprise size revealed statistically significant differences at the .05 level in five aspects: production, marketing, finance, personnel, and export. The findings also showed that small enterprises faced more severe problems than medium enterprises.

Keywords: automotive part industry, operation problems, SME, Perimeter

Procedia PDF Downloads 377
677 Calculation of Orbital Elements for Sending Interplanetary Probes

Authors: Jorge Lus Nisperuza Toledo, Juan Pablo Rubio Ospina, Daniel Santiago Umana, Hector Alejandro Alvarez

Abstract:

This work develops and implements computational codes to calculate optimal launch trajectories for sending a probe from the Earth to different planets of the Solar System, making use of Hohmann and non-Hohmann trajectories and gravitational assists at intermediate steps. Specifically, the orbital elements, plots, and dynamic simulations of the trajectories for sending a probe from the Earth to Mercury, Venus, Mars, Jupiter, and Saturn are obtained. A detailed study of the position and orbital velocity state vectors of the planets considered was made in order to determine the optimal trajectories of the probe. For this purpose, computer codes were developed and implemented to obtain the orbital elements of the Mariner 10 (Mercury), Magellan (Venus), Mars Global Surveyor (Mars) and Voyager 1 (Jupiter and Saturn) missions, as an exercise to corroborate the algorithms. This exercise validates the computational codes, allowing the orbital elements and trajectory simulations of three future interplanetary missions with specific launch windows to be found.
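The Hohmann transfer underlying the computed trajectories can be sketched as below. The heliocentric radii are illustrative mean circular values for an Earth-Mars transfer; the actual codes also handle non-Hohmann legs and gravity assists:

```python
import math

MU_SUN = 1.32712440018e11  # heliocentric gravitational parameter, km^3/s^2

def hohmann_delta_v(r1_km, r2_km, mu=MU_SUN):
    """Delta-v (km/s) of the two impulses of a Hohmann transfer between
    circular, coplanar orbits of radii r1 and r2, plus the time of flight
    in seconds (half the period of the transfer ellipse)."""
    a_t = 0.5 * (r1_km + r2_km)               # transfer semi-major axis
    v1 = math.sqrt(mu / r1_km)                # circular speed at departure
    v2 = math.sqrt(mu / r2_km)                # circular speed at arrival
    v_peri = math.sqrt(mu * (2.0 / r1_km - 1.0 / a_t))  # vis-viva
    v_apo = math.sqrt(mu * (2.0 / r2_km - 1.0 / a_t))
    t_transfer = math.pi * math.sqrt(a_t ** 3 / mu)
    return abs(v_peri - v1), abs(v2 - v_apo), t_transfer

# Earth -> Mars with mean circular heliocentric radii (illustrative)
dv1, dv2, tof = hohmann_delta_v(1.496e8, 2.279e8)
```

This reproduces the familiar ballpark figures of roughly 2.9 and 2.7 km/s for the two burns and about 260 days of flight, which is why the Hohmann case is a convenient check before adding gravity assists.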

Keywords: gravitational assistance, Hohmann’s trajectories, interplanetary mission, orbital elements

Procedia PDF Downloads 177
676 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single-cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of state-of-the-art filtering methods for single-cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single-cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices that belong only weakly to their cluster. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a peripheral blood mononuclear cell dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
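The shared nearest neighbor construction at the heart of the algorithm can be sketched as below. The mutual-kNN rule and the fixed threshold are simplifications of the modularity-based community detection and cluster-level metrics the paper actually uses:

```python
import numpy as np

def snn_similarity(data, k):
    """Shared nearest neighbor similarity: two cells get an edge only if
    they are mutual k-nearest neighbours, weighted by how many of their
    k nearest neighbours they share."""
    d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]     # skip self at position 0
    n = len(data)
    snn = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if j in knn[i] and i in knn[j]:     # mutual neighbours only
                shared = len(set(knn[i]) & set(knn[j]))
                snn[i, j] = snn[j, i] = shared
    return snn

def weakly_attached(snn, threshold):
    """Vertices whose strongest SNN link is below the threshold: candidate
    noise cells to remove before re-clustering."""
    return [i for i in range(len(snn)) if snn[i].max() < threshold]
```

A noise cell may sit closest to some real cluster, but because no cluster member counts it among its own nearest neighbours, it ends up with no mutual edges and is flagged.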

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 107
675 Integration of Agile Philosophy and Scrum Framework to Missile System Design Processes

Authors: Misra Ayse Adsiz, Selim Selvi

Abstract:

In today's world, technology is racing against time. To keep up with the world's leading companies and adapt quickly to change, it is necessary to speed up processes and keep pace with the rate of technological change. Missile system design processes handled with classical methods fall behind in this race, because customer requirements are unclear and demands change again and again during design. Therefore, a methodology suitable for the dynamics of missile system design has been investigated, and the processes used to keep up with the era are examined. When commonly used design processes are analyzed, it is seen that none of them is dynamic enough for today's conditions, so a hybrid design process is established. After a detailed review of the existing processes, it was decided to focus on the Scrum framework and agile philosophy. Scrum is a process framework focused on developing software and handling change management with rapid methods, while agile philosophy is intended to respond quickly to change. In this study, the aim is to integrate the Scrum framework and agile philosophy, the most appropriate approaches for rapid production and adaptation to change, into the missile system design process. With this approach, the design team involved in the system design process stays in communication with the customer and follows an iterative approach to change management. These methods, currently used in the software industry, have been integrated with the product design process. A team is created for the system design process, and the Scrum roles are realized with the customer included: a Scrum team consists of the product owner, the development team, and the Scrum master. Scrum events, which are short, purposeful, and time-limited, are organized to serve coordination rather than long meetings. Instead of the classic system design methods used in product development studies, a missile design is carried out with this blended method. With the help of this design approach, it becomes easier to anticipate changing customer demands, produce quick solutions to those demands, and combat uncertainties in the product development process. With the feedback of the customer, who is included in the process, work is directed toward marketing, design, and financial optimization.

Keywords: agile, design, missile, scrum

Procedia PDF Downloads 164
674 Computational Determination of the Magneto Electronic Properties of Ce₁₋ₓCuₓO₂ (x=12.5%): Emerging Material for Spintronic Devices

Authors: Aicha Bouhlala, Sabah Chettibi

Abstract:

Doping CeO₂ with transition metals is an effective way of tuning its properties. In the present work, we have performed self-consistent ab-initio calculations using the full-potential linearized augmented plane-wave method (FP-LAPW), based on density functional theory (DFT) as implemented in the Wien2k simulation code, to study the structural, electronic, and magnetic properties of the fluorite-type oxide Ce₁₋ₓCuₓO₂ (x=12.5%) and to explore the effects of the Cu dopant in ceria. The exchange-correlation potential has been treated using the Perdew-Burke-Ernzerhof functional revised for solids (PBEsol). Regarding structural properties, the equilibrium lattice constant of the compound is found to be 5.382 Å. Regarding electronic properties, the spin-polarized electronic band structure elucidates the semiconductor nature of the material in both spin channels: the compound has a narrow bandgap in the spin-down configuration (0.162 eV) and a wider bandgap in the spin-up configuration (2.067 eV). The dopant atom Cu plays a vital role in increasing the magnetic moment of the supercell, and the total magnetic moment is found to be 2.99438 μB. Therefore, the Cu-doped CeO₂ compound shows strong ferromagnetic behavior. The predicted results suggest that the compound could be a good candidate for spintronics applications.

Keywords: Cu-doped CeO₂, DFT, Wien2k, properties

Procedia PDF Downloads 249
673 Numerical Analysis and Influence of the Parameters on Slope Stability

Authors: Fahim Kahlouche, Alaoua Bouaicha, Sihem Chaîbeddra, Sid-Ali Rafa, Abdelhamid Benouali

Abstract:

Designing a structure sometimes requires building it on rough or sloping ground. Besides the problem of landslide stability, the behavior of the foundations bearing the structure is influenced by the destabilizing effect of the ground's slope. This article analyzes slope stability under loading, introducing the different factors influencing the slope's behavior on the one hand, and the influence of this slope on the foundation's behavior on the other. The study uses elastoplastic modeling with FLAC 2D. This software is based on the finite difference method, one of the oldest methods for the numerical solution of systems of differential equations with initial and boundary conditions, and was developed for geotechnical simulation. The aim of this simulation is to demonstrate the notable effect of the shear modulus « G », the cohesion « C », the inclination angle « β », and the distance between the foundation and the head of the slope on the stability of the slope as well as the stability of the foundation. In our simulation, the slope consists of homogeneous ground. The foundation is considered rigid, so the loading is applied as vertical forces on the nodes representing the contact between the foundation and the ground.
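The abstract studies how cohesion « C » and the inclination angle « β » affect stability numerically with FLAC 2D. As a back-of-the-envelope companion (not part of the paper's model), the textbook infinite-slope factor of safety with a Mohr-Coulomb criterion and no seepage shows the same qualitative dependence; all input values below are hypothetical.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety of an infinite slope (Mohr-Coulomb, dry, no seepage).
    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight (kN/m3),
    z: depth of slip plane (m), beta_deg: slope inclination."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2             # normal stress on slip plane
    shear = gamma * z * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + normal * math.tan(phi)) / shear
```

Raising β lowers the factor of safety, while raising c raises it, which is the trend the numerical study examines in a far more general setting.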

Keywords: slope, shallow foundation, numeric method, FLAC 2D

Procedia PDF Downloads 279
672 Assessment of the Impact of CSR on the Business Performance of Australian Banks

Authors: Montoya C.A., Erina J., Erina I.

Abstract:

The purpose of this research is to assess the performance and impact of CSR on business in the banking sector in Australia by applying the financial indicators of 20 ASX banks for the period 2016-2017. The authors carried out the CSR assessment in several stages: 1) gathering the non-financial and financial indicators of 20 ASX-listed banks (only 16 were available) from the annual reports of Australian banks for 2016 and 2017; 2) calculating bank performance indicators from financial indicators such as return on assets (ROA), return on equity (ROE), the efficiency ratio, and the net interest margin; 3) analyzing the financial data using cross-sectional regression and answering the research questions. Based on the obtained results, the authors answered the initially raised research questions and came to the following conclusions: Q1 - an insignificant positive coefficient, indicating a slight positive relationship between CSR disclosure and business performance in 2016; Q2 - an insignificant negative coefficient, indicating a slight negative relationship between CSR disclosure and business performance in 2017; Q3 - an insignificant positive coefficient, indicating a slight positive relationship between CSR disclosure and business performance overall.
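Stages 2 and 3 above can be sketched as follows. This is an illustrative outline, not the authors' code; the regression of a performance ratio on a CSR disclosure score is shown as a simple one-regressor OLS, and all numeric inputs are hypothetical.

```python
import numpy as np

def performance_ratios(net_income, total_assets, equity):
    """Stage 2: ROA and ROE from basic annual-report figures."""
    return net_income / total_assets, net_income / equity

def cross_sectional_ols(csr_score, performance):
    """Stage 3: OLS of a performance indicator on a CSR disclosure score.
    Returns (intercept, slope); the sign and size of the slope answer
    whether the relationship is positive or negative."""
    X = np.column_stack([np.ones_like(csr_score), csr_score])
    beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
    return beta[0], beta[1]
```

A near-zero slope estimate, as reported in the abstract, corresponds to an "insignificant coefficient" once standard errors are taken into account.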

Keywords: Australia, banks, business performance, CSR

Procedia PDF Downloads 67
671 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization

Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung

Abstract:

In Digital Marketing, robust quantification of View-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an Ad but without an explicit click (e.g. a TV ad). A lack of a tracking mechanism makes VTA estimation challenging. Most prevalent VTA estimation techniques rely on post-purchase in-product user surveys. User surveys enable the calculation of channel multipliers, which are the ratio of the view-attributed to the click-attributed purchases of each marketing channel. Channel multipliers thus provide a way to estimate the unknown VTA for a channel from its known click attribution. In this work, we use Convex Optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys. Casting channel attribution as a Convex Optimization problem allows an introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate Ad effectiveness in a privacy-centric world that increasingly limits user tracking.
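The constrained-multiplier idea can be sketched with SciPy's bounded least squares, which is itself a small convex program. This is a simplified stand-in for the paper's formulation: each channel's multiplier is fit so that multiplier times click-attributed purchases matches the survey-implied view-through purchases, with box constraints playing the role of the fluctuation-limiting constraints; the channel counts and bounds below are hypothetical.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_channel_multipliers(clicks, surveyed_vta, lower=0.2, upper=3.0):
    """Solve  min ||diag(clicks) m - surveyed_vta||^2  s.t.  lower <= m <= upper.
    clicks[i]: click-attributed purchases of channel i;
    surveyed_vta[i]: view-through purchases implied by user surveys.
    The bounds keep multipliers from overfitting noisy survey counts."""
    A = np.diag(np.asarray(clicks, dtype=float))
    res = lsq_linear(A, np.asarray(surveyed_vta, dtype=float),
                     bounds=(lower, upper))
    return res.x
```

Without the bounds, a channel with a noisy survey count would get an extreme multiplier; the constraint clips it to the plausible range, which is exactly the fluctuation-limiting behavior the abstract attributes to the convex formulation.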

Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution

Procedia PDF Downloads 185
670 Theoretical and Experimental Electrostatic Parameters Determination of 4-Methyl-N-[(5-Nitrothiophen-2-Ylmethylidene)] Aniline Compound

Authors: N. Boukabcha, Y. Megrouss, N. Benhalima, S. Yahiaoui, A. Chouaih, F. Hamzaoui

Abstract:

We present the electron density analysis of the organic compound 4-methyl-N-[(5-nitrothiophen-2-ylmethylidene)]aniline, with chemical formula C₁₂H₁₀N₂O₂S. Indeed, determining the electrostatic properties of nonlinear optical organic compounds requires knowledge of the distribution of the electron density with high precision. A structural analysis is also performed; two methods are used to obtain the structure: X-ray diffraction and theoretical calculation with density functional theory (DFT). The electron density study is performed using the MoPro program (version 1503), based on the multipolar model of Hansen and Coppens. The electron density analysis allows determination of the value and orientation of the dipole moment. The net atomic charges, the electrostatic potential, and the molecular dipole moment have been determined in order to understand the nature of inter- and intramolecular charge transfer. The study reveals the nature of the intermolecular interactions, including charge transfer and hydrogen bonds, in the title compound. Crystallographic data: monoclinic system, space group P2₁/n. Cell parameters: a = 4.7606(4) Å, b = 22.415(2) Å, c = 10.7008(15) Å, β = 92.566(13)°, V = 1140.7(2) Å³, Z = 4, R = 0.0034 for 2693 observed reflections.
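Once net atomic charges are refined, the molecular dipole moment follows from the standard point-charge sum μ = Σᵢ qᵢ rᵢ. A minimal sketch (not the MoPro calculation itself; the two-charge example is purely illustrative):

```python
import numpy as np

def dipole_moment(charges, positions):
    """Molecular dipole vector (in e·Å if inputs are in e and Å) from net
    atomic charges q_i and coordinates r_i: mu = sum_i q_i * r_i.
    Origin-independent only when the charges sum to zero (neutral molecule)."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(positions, dtype=float)
    return q @ r  # (n,) @ (n, 3) -> (3,) dipole vector
```

Both the magnitude (the norm of the returned vector) and the orientation discussed in the abstract come directly from this vector.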

Keywords: electron density, dipole moment, electrostatic potential, DFT, Mopro

Procedia PDF Downloads 308
669 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration

Authors: C. Iraklis, G. Evmiridis, A. Iraklis

Abstract:

Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, large power losses, unreliable power management, reverse power flow, and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both combined into a weighted-sum objective. Two factors describing congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as the solution tool, focusing on the technique of network reconfiguration. The upgraded SPSO algorithm is achieved by adding a heuristic algorithm specializing in the reduction of power losses, and several scenarios are tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
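The weighted-sum objective driven by a particle swarm can be sketched generically. This is a plain continuous PSO on a toy objective, not the paper's selective variant or its discrete reconfiguration encoding; the weights, swarm size, and surrogate loss/congestion functions are all assumptions for illustration.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box [lo, hi] with a basic particle swarm.
    Returns (best_position, best_value)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)      # personal best values
    g = pbest[pbest_f.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Weighted-sum objective: hypothetical surrogates for losses and congestion.
def objective(x, w_loss=0.5, w_cong=0.5):
    loss = ((x - 1.0) ** 2).sum()       # stand-in for power losses
    congestion = ((x + 1.0) ** 2).sum() # stand-in for the congestion factors
    return w_loss * loss + w_cong * congestion
```

With equal weights the two surrogate terms trade off symmetrically and the optimum sits between their individual minima; in the paper, losses and congestion from a power-flow solution take the place of the quadratics.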

Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid

Procedia PDF Downloads 440
668 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveying. For efficient inversion, the forward modeling routine, the sensitivity calculation, and the inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It reviews the procedures used for the acquisition, processing, and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D, and even 4-D surveys efficiently, resolving complex geological structures that were not accessible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
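The linearized least-squares update at the heart of such inversions can be sketched as a single damped Gauss-Newton step. This is the generic scheme (solve (JᵀJ + λI) Δm = Jᵀr for the model update Δm, where J is the sensitivity matrix and r the data residual), not any specific code reviewed here; the damping factor λ is the usual regularization knob.

```python
import numpy as np

def damped_least_squares_step(J, residual, lam):
    """One linearized least-squares model update with damping:
    solve (J^T J + lam * I) dm = J^T r.
    J: (n_data, n_model) sensitivity (Jacobian) matrix,
    residual: observed minus predicted data,
    lam: damping factor stabilizing the local optimization."""
    JtJ = J.T @ J
    rhs = J.T @ residual
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
```

For a truly linear forward model, one step with small damping recovers the model exactly; in resistivity inversion the step is applied iteratively as the Jacobian is re-linearized around the current model.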

Keywords: inversion, limitations, optimization, resistivity

Procedia PDF Downloads 357
667 Infinite Impulse Response Digital Filters Design

Authors: Phuoc Si Nguyen

Abstract:

Infinite impulse response (IIR) filters can be designed from an analogue low pass prototype by using frequency transformation in the s-domain and the bilinear z-transformation with frequency pre-warping; this method is known as frequency transformation from the s-domain to the z-domain. This paper will introduce a new method to transform an IIR digital filter to another type of IIR digital filter (low pass, high pass, band pass, band stop or narrow band) using a technique based on the inverse bilinear z-transformation and inverse matrices. First, a matrix equation is derived from the inverse bilinear z-transformation and Pascal’s triangle. This Low Pass Digital to Digital Filter Pascal Matrix Equation is used to transform a low pass digital filter to other digital filter types. From this equation and the inverse matrix, a Digital to Digital Filter Pascal Matrix Equation can be derived that is able to transform any IIR digital filter. This paper will also introduce some specific matrices to replace the inverse matrix, which is difficult to determine due to the larger size of the matrix in the current method. This will make computing and hand calculation easier when transforming from one IIR digital filter to another in the digital domain.
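The Pascal's-triangle structure the abstract relies on can be illustrated with the generic lower-triangular Pascal matrix and its closed-form inverse. This sketch shows only the building block, not the paper's specific filter-transformation matrices: the inverse of the Pascal matrix is again a Pascal matrix with alternating signs, which is the kind of explicit replacement for a hard-to-compute inverse that the abstract alludes to.

```python
import math
import numpy as np

def pascal_lower(n):
    """Lower-triangular Pascal matrix: P[i, j] = C(i, j) for j <= i, else 0.
    Its rows are the rows of Pascal's triangle, i.e. the coefficients of
    (1 + x)^i, which arise when expanding powers of (1 + z^-1) in the
    bilinear substitution."""
    return np.array([[math.comb(i, j) if j <= i else 0
                      for j in range(n)] for i in range(n)], dtype=float)

def pascal_lower_inverse(n):
    """Closed-form inverse: (P^-1)[i, j] = (-1)^(i + j) * C(i, j).
    No numerical matrix inversion needed, even for large n."""
    return np.array([[(-1) ** (i + j) * math.comb(i, j) if j <= i else 0
                      for j in range(n)] for i in range(n)], dtype=float)
```

Having the inverse in closed form is what makes hand calculation tractable as the matrix grows.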

Keywords: bilinear z-transformation, frequency transformation, inverse bilinear z-transformation, IIR digital filters

Procedia PDF Downloads 414
666 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern of Morocco

Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui

Abstract:

Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for the analysis of a landslide susceptibility map in Oudka, Morocco, using a geographical information system. From a spatial database containing landslide mapping, topography, soil, hydrology, and lithology data, eight landslide-related factors were calculated or extracted: elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and the Normalized Difference Vegetation Index (NDVI). Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for building the models and the second for their validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
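The success and prediction rates reported above are AUC values: the area under the ROC curve of the susceptibility index against observed landslide/non-landslide cells, computed on the training and validation parts respectively. A minimal numpy sketch of the metric (assuming untied scores; this is the standard rank-sum formulation, not the authors' GIS workflow):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.
    scores: susceptibility index per cell; labels: 1 = landslide, 0 = stable.
    Assumes no tied scores (ties would need averaged ranks)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.5 is chance level and 1.0 is perfect ranking, so the reported 0.951 prediction rate means MarSpline ranks nearly all landslide cells above stable cells on held-out data.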

Keywords: landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate

Procedia PDF Downloads 183
665 The Effect of Environmental, Social, and Governance (ESG) Disclosure on Firms’ Credit Rating and Capital Structure

Authors: Heba Abdelmotaal

Abstract:

This paper explores the impact of the extent of a company's environmental, social, and governance (ESG) disclosure on its credit rating and capital structure. The analysis is based on a sample of 202 firms from the FTSE 350 over the period 2008-2013. The ESG disclosure score is measured using the proprietary Bloomberg score, based on the extent of a company's ESG disclosure. The credit rating is measured by the QuiScore, a measure of the likelihood that a company will become bankrupt in the twelve months following the date of calculation. Capital structure is measured by the long-term debt ratio. Two hypotheses are tested using panel data regression. The results suggest that a higher degree of ESG disclosure leads to a better credit rating, and that there is a significant negative relationship between ESG disclosure and the long-term debt percentage. The paper discusses implications for the transparency resulting from ESG disclosure, which could support the monitoring function: disclosure increases the transparency available to credit rating agencies and can also affect managers' actions. This study provides empirical evidence on the materiality of ESG disclosure for credit rating changes and firms' capital structure decisions.
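A panel data regression of the kind used above can be sketched with the within (fixed-effects) estimator: demean the outcome and the regressor firm by firm, then regress. This is a generic single-regressor sketch, not the paper's specification, and the firm effects and coefficient in the test are simulated, not estimated from the FTSE sample.

```python
import numpy as np

def within_ols(y, x, firm_ids):
    """Fixed-effects (within) estimator for a one-regressor panel:
    demean y and x within each firm, then return the OLS slope on the
    demeaned data. Firm-specific intercepts drop out of the estimation."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    ids = np.asarray(firm_ids)
    yd, xd = y.copy(), x.copy()
    for f in np.unique(ids):
        m = ids == f
        yd[m] -= y[m].mean()
        xd[m] -= x[m].mean()
    return (xd @ yd) / (xd @ xd)  # slope estimate
```

Demeaning removes anything constant per firm (industry, long-run risk profile), so the slope captures only the within-firm co-movement of, say, the debt ratio with the ESG score over the sample years.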

Keywords: capital structure, credit rating agencies, ESG disclosure, panel data regression

Procedia PDF Downloads 355