Search results for: inverse distance weighted method
Paper Count: 20861

18191 Rating and Generating Sudoku Puzzles Based on Constraint Satisfaction Problems

Authors: Bahare Fatemi, Seyed Mehran Kazemi, Nazanin Mehrasa

Abstract:

Sudoku is a logic-based combinatorial puzzle game that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous. Most magazines, newspapers, puzzle books, etc. publish lots of Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that all players, from beginner to expert, can enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers. There are several works in the literature that propose ways of generating puzzles with a desirable level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles. We then propose a hill-climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of different difficulty levels and having a group of 15 people solve all the puzzles while we record the time they spend on each puzzle.
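
A minimal sketch of the hill-climbing loop described above, in Python. The CSP-based difficulty rater `rate` and the uniqueness check are placeholders (the paper's actual rating procedure is not reproduced here); a neighbor is generated by revealing or hiding a single clue of a known solution grid.

```python
import random

def hill_climb(solution, givens, rate, target, iters=5000):
    """Hill-climb a mask of given cells toward a target difficulty.

    solution: solved 9x9 grid (list of lists); givens: 9x9 boolean mask of
    revealed cells; rate: difficulty function (placeholder for the paper's
    CSP-based rater); target: desired rating. The solution-uniqueness check
    is omitted for brevity.
    """
    def puzzle(mask):
        return [[solution[r][c] if mask[r][c] else 0 for c in range(9)]
                for r in range(9)]

    best = [row[:] for row in givens]
    best_err = abs(rate(puzzle(best)) - target)
    for _ in range(iters):
        cand = [row[:] for row in best]
        r, c = random.randrange(9), random.randrange(9)
        cand[r][c] = not cand[r][c]            # reveal or hide one clue
        err = abs(rate(puzzle(cand)) - target)
        if err < best_err:                     # keep only improving moves
            best, best_err = cand, err
    return puzzle(best)
```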

Keywords: constraint satisfaction problem, generating Sudoku puzzles, hill climbing

Procedia PDF Downloads 398
18190 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to repeatedly require medical attention. An OOHC delivers ad-hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms that provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To that end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program with respect to lexemes within true-positive and true-negative cases, with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
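
The lexeme-ranking step sketched at the end of the abstract can be illustrated as follows; the exact weighting the authors use is not specified, so a simple product of mean classifier confidence and inverse document frequency is assumed here.

```python
import math
from collections import defaultdict

def indicator_scores(docs, confidences):
    """Rank lexemes as indicators of (non-)frequent-attender cases.

    docs: list of token lists for true-positive (or true-negative) cases;
    confidences: classifier confidence for each case. Each lexeme's score
    combines the mean confidence of the cases containing it with its
    inverse document frequency; the product used here is an assumption,
    not the authors' stated formula.
    """
    n = len(docs)
    conf_sum, doc_freq = defaultdict(float), defaultdict(int)
    for tokens, conf in zip(docs, confidences):
        for lexeme in set(tokens):
            conf_sum[lexeme] += conf
            doc_freq[lexeme] += 1
    return {lex: (conf_sum[lex] / doc_freq[lex]) * math.log(n / doc_freq[lex])
            for lex in doc_freq}

# Example: lexemes concentrated in confidently classified cases score highest.
scores = indicator_scores([["chest", "pain"], ["pain", "repeat"]], [0.9, 0.8])
```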

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 124
18189 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
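
A hedged sketch of what the expanded retrieval module might look like: dense text retrieval fused with graph-derived evidence. The linear fusion rule, the `alpha` weight, and the assumption that the GNN has already produced per-passage scores are illustrative choices, not details from the paper.

```python
import numpy as np

def hybrid_retrieve(query_vec, text_index, kg_scores, alpha=0.5, k=5):
    """Blend dense text retrieval with knowledge-graph evidence.

    text_index: {passage_id: embedding array}; kg_scores: {passage_id:
    relevance score}, assumed to come from a GNN over the subgraph linked
    to each passage (the GNN itself is outside this sketch).
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    fused = {pid: alpha * cos(query_vec, emb)
                  + (1 - alpha) * kg_scores.get(pid, 0.0)
             for pid, emb in text_index.items()}
    return sorted(fused, key=fused.get, reverse=True)[:k]
```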

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 29
18188 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done semi-analytically in a single step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, which is what drives the complexity of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
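
The core inversion step can be illustrated on a toy case. Given a characteristic function, the COS method expands the density in a cosine series and integrates it term by term, so the CDF is recovered semi-analytically; the sketch below applies this to a standard normal for verification, whereas the paper applies the idea to the (conditional) portfolio-loss distribution of a factor-copula model.

```python
import numpy as np
from scipy.stats import norm

def cos_cdf(cf, x, a=-10.0, b=10.0, N=128):
    """Recover a CDF from a characteristic function via the COS method.

    cf: characteristic function phi(t). The density on [a, b] is expanded
    in cosines with coefficients A_k; integrating cos(u_k (x - a)) term by
    term gives the CDF in closed form (the k = 0 term contributes (x - a)).
    """
    k = np.arange(N)
    u = k * np.pi / (b - a)
    A = 2.0 / (b - a) * (cf(u) * np.exp(-1j * u * a)).real
    x = np.atleast_1d(x)
    terms = np.sin(np.outer(x - a, u[1:])) / u[1:]
    return 0.5 * A[0] * (x - a) + terms @ A[1:]

# Sanity check against the standard normal CDF.
xs = np.array([-1.0, 0.0, 1.5])
approx = cos_cdf(lambda t: np.exp(-0.5 * t**2), xs)
assert np.allclose(approx, norm.cdf(xs), atol=1e-6)
```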

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 159
18187 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata

Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen

Abstract:

This article introduces a computationally efficient method for stacking sequence blending of composite structures. Its computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes blending a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitively expensive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automaton scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.

Keywords: composite, blending, optimization, lamination parameters

Procedia PDF Downloads 222
18186 A New Conjugate Gradient Method with Guaranteed Descent

Authors: B. Sellami, M. Belloufi

Abstract:

Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been studied extensively in recent years. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three existing practical nonlinear conjugate gradient methods but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the two-parameter family. The numerical results show that the method is efficient for the given test problems. In addition, the methods related to this family are discussed in a unified manner.
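
The abstract does not give the family's beta formula, so the sketch below uses the Polak-Ribiere+ update as a stand-in to illustrate the general scheme: a nonlinear CG iteration whose step lengths satisfy the Wolfe conditions.

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear conjugate gradient with a Wolfe line search.

    The PR+ beta below is a placeholder for the paper's two-parameter
    update; the Wolfe line search is the ingredient tied to the descent
    property discussed in the abstract.
    """
    x, g = x0.astype(float), grad(x0)
    d = -g
    for _ in range(max_iter):
        alpha = line_search(f, grad, x, d)[0]   # enforces Wolfe conditions
        if alpha is None:
            break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))  # -> approx [1, 1]
```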

Keywords: unconstrained optimization, conjugate gradient method, line search, global convergence

Procedia PDF Downloads 447
18185 Periodic Topology and Size Optimization Design of Tower Crane Boom

Authors: Wu Qinglong, Zhou Qicai, Xiong Xiaolei, Zhang Richeng

Abstract:

In order to achieve layout and size optimization of the web members of a tower crane boom, a truss topology and cross-section size optimization method based on the continuum is proposed, considering three typical working conditions. Firstly, the optimization model is established by replacing the web members with web plates, and the web plates are divided into several sub-domains so that the periodic soft kill option (SKO) method can be carried out for topology optimization of the slender boom. After obtaining the optimized topology of the web plates, the optimized layout of the web members is formed by extracting the principal stress distribution. Finally, using the web member radius as the design variable, the boom compliance as the objective, and the material volume of the boom as the constraint, the cross-section size optimization mathematical model is established. The size optimization criterion is deduced from the mathematical model by the Lagrange multiplier method and the Kuhn-Tucker condition. A comparison of the original boom with the optimal boom shows that this optimization method can effectively lighten the boom and improve its performance.

Keywords: tower crane boom, topology optimization, size optimization, periodic, SKO, optimization criterion

Procedia PDF Downloads 549
18184 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load Resistant Factor Design Methods

Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih

Abstract:

The method selected for the design of structures can affect not only their seismic vulnerability but also their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load resistant factor design (LRFD). This study investigates the effect of using these design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with a special moment-resisting frame and an eccentrically braced system was selected for this study. The building was designed for three different intensities of peak ground acceleration, namely 0.2 g, 0.25 g, and 0.3 g, using the ASD and LRFD methods. The required sizes of beams, columns, and braces were obtained using response spectrum analysis. The designed frames were then subjected to nine natural earthquake records scaled to the design response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and compared. Results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts than the ASD method. It was concluded that the application of the LRFD method not only reduced the weights of structural elements but also provided a higher safety margin against seismic actions compared with the ASD method.

Keywords: allowable stress design, load resistant factor design, nonlinear time history analysis, seismic vulnerability, steel structures

Procedia PDF Downloads 263
18183 Estimating Multidimensional Water Poverty Index in India: The Alkire Foster Approach

Authors: Rida Wanbha Nongbri, Sabuj Kumar Mandal

Abstract:

The Sustainable Development Goals (SDGs) for 2016-2030 were adopted as successors to the Millennium Development Goals (MDGs), which focused on access to sustainable water and sanitation. For over a decade, water has been a significant subject explored in various facets of life, and our day-to-day life is significantly impacted by water poverty at the socio-economic level. Reducing water poverty is an important policy challenge, particularly in emerging economies like India, owing to population growth and huge variations in topography and climatic factors. To design appropriate water policies and assess their effectiveness, a proper measurement of water poverty is essential. Against this backdrop, this study uses the Alkire-Foster (AF) methodology to estimate a multidimensional water poverty index for India at the household level. The methodology captures several attributes to understand the complex issues related to households' water deprivation. The study employs two rounds of Indian Human Development Survey data (IHDS 2005 and 2012) and considers four dimensions of water poverty, namely water access, water quantity, water quality, and water capacity, with seven indicators capturing these four dimensions. In order to quantify water deprivation at the household level, the AF dual cut-off counting method is applied, and the Multidimensional Water Poverty Index (MWPI) is calculated as the product of the Headcount Ratio (incidence) and the average share of weighted deprivations (intensity). The results identify deprivation across all dimensions at the country level and show that a large proportion of households in India is deprived of quality water and suffers from poor water access in both the 2005 and 2012 survey rounds. The comparison between rural and urban households shows that a higher proportion of rural households is multidimensionally water poor compared to their urban counterparts. Among the four dimensions of water poverty, water quality is found to be the most significant one for both rural and urban households. In the 2005 round, almost 99.3% of households are water poor in at least one of the four dimensions, and among the water-poor households, the intensity of water poverty is 54.7%. These values do not change significantly in the 2012 round, but significant differences can be observed across the dimensions. States like Bihar, Tamil Nadu, and Andhra Pradesh rank highest in terms of MWPI, whereas Sikkim, Arunachal Pradesh, and Chandigarh rank lowest in the 2005 round. Similarly, in the 2012 round, Bihar, Uttar Pradesh, and Orissa rank highest in terms of MWPI, whereas Goa, Nagaland, and Arunachal Pradesh rank lowest. The policy implications of this study are multifaceted: policy makers can focus either on impoverished households with lower intensity levels of water poverty, to minimize the total number of water-poor households, or on households with a high intensity of water poverty, to achieve an overall reduction in MWPI.
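
The dual cut-off computation of MWPI = H x A is compact enough to state directly; in the sketch below the indicator weights and the cutoff k are illustrative, not the IHDS values used in the study.

```python
import numpy as np

def alkire_foster(g0, weights, k):
    """Alkire-Foster dual-cutoff index: M0 = H (incidence) x A (intensity).

    g0: households x indicators 0/1 deprivation matrix (the first,
    indicator-level cutoff has already been applied); weights: indicator
    weights summing to 1; k: second (poverty) cutoff on the weighted
    deprivation score.
    """
    score = g0 @ weights                   # weighted deprivation share
    poor = score >= k                      # dual-cutoff identification
    H = poor.mean()                        # headcount ratio (incidence)
    A = score[poor].mean() if poor.any() else 0.0   # intensity among the poor
    return H * A, H, A

# Four households, three illustrative indicators, k = 1/3.
g0 = np.array([[1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
w = np.array([1/3, 1/3, 1/3])
print(alkire_foster(g0, w, k=1/3))  # -> (0.5, 0.75, 0.666...)
```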

Keywords: Alkire-Foster (AF) methodology, deprivation, dual cut-off, multidimensional water poverty index (MWPI)

Procedia PDF Downloads 67
18182 An Online Mastery Learning Method Based on a Dynamic Formative Evaluation

Authors: Jeongim Kang, Moon Hee Kim, Seong Baeg Kim

Abstract:

This paper proposes a novel e-learning model based on a dynamic formative evaluation. An evaluation of existing e-learning formats shows that the repetitive learning required to achieve mastery causes learners to lose focus and neglect their learning. The proposed dynamic formative evaluation is able to overcome the limitations of the existing approaches. Since a repetitive learning method does not provide perfect feedback, this paper emphasizes a dynamic formative evaluation that is able to maximize learning achievement. Through the dynamic formative evaluation, the instructor can refer to the evaluation results when making estimations about the learner. A flow chart of learning based on the dynamic formative evaluation is presented, and the model's effectiveness and validity are demonstrated.

Keywords: online learning, dynamic formative evaluation, mastery learning, repetitive learning method, learning achievement

Procedia PDF Downloads 503
18181 Ultimate Shear Resistance of Plate Girders, Part 2: Höglund Theory

Authors: Ahmed S. Elamary

Abstract:

The ultimate shear resistance (USR) of slender plate girders can be predicted theoretically using the Cardiff theory or the Höglund theory. This paper is concerned with predicting the USR using the Höglund theory and EC3. Two main factors affect the USR, the panel width "b" and the web depth "d"; consequently, limits on the panel aspect ratio (b/d) have to be identified. In most previous studies, no limit on the panel aspect ratio is indicated. In this paper, a theoretical analysis is conducted to study the effect of (b/d) on the USR. The analysis is based on ninety-six test results of steel plate girders subjected to shear, executed and collected by others. A new formula is proposed to predict the ratio of the distance "c" between the plastic hinges that form in the flanges to the panel width "b". Conservative limits on (c/b) are suggested to obtain consistent values of the USR.

Keywords: ultimate shear resistance, plate girder, Höglund's theory, EC3

Procedia PDF Downloads 404
18180 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

Authors: Salam Khalifa, Naveed Ahmed

Abstract:

We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to exploit temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.

Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation

Procedia PDF Downloads 368
18179 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Structural health monitoring systems based on seismic responses have been employed to reduce seismic damage. The inter-story drift ratio, a major index in seismic capacity assessment, is used to estimate the seismic damage of buildings. Meanwhile, the seismic response analysis needed to estimate the structural responses of buildings incurs a significantly high computational cost, owing to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under earthquakes efficiently, this paper suggests an estimation method for inter-story drift based on an artificial neural network (ANN). In the method, a radial basis function neural network (RBFNN) is integrated with an optimization algorithm that tunes its variables through evolutionary learning, referred to as the evolutionary radial basis function neural network (ERBFNN). The method estimates the inter-story drift without seismic response analysis when buildings are subjected to new earthquakes. The effectiveness of the method is verified through a simulation using a multi-degree-of-freedom system.
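
The non-evolutionary core of the model, an RBF network whose output weights are fit by least squares, can be sketched as follows; in the ERBFNN the centers and widths would additionally be tuned by the genetic algorithm, which is omitted here.

```python
import numpy as np

def train_rbfnn(X, y, centers, width):
    """Fit the output weights of an RBF network by least squares.

    X: samples x features (e.g. ground-motion features, an assumption of
    this sketch); y: inter-story drift ratios; centers and width are taken
    as given here, whereas the paper evolves them.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))        # Gaussian radial basis
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbfnn(X, centers, width, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ w
```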

Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm

Procedia PDF Downloads 324
18178 Estimating 3D-Position of a Stationary Random Acoustic Source Using Bispectral Analysis of 4-Point Detected Signals

Authors: Katsumi Hirata

Abstract:

To develop a useful acoustic environment recognition system, a method is proposed for estimating the 3D position of a stationary random acoustic source using bispectral analysis of 4-point detected signals. The method uses information about amplitude attenuation and propagation delay extracted from the amplitude ratios and angles of the auto- and cross-bispectra of the detected signals. Bispectral analysis is expected to be less affected by Gaussian noise than conventional power spectral analysis. In this paper, the basic principle of the method is described first, and its validity and features are then considered based on the results of fundamental experiments under assumed ideal circumstances.
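
A direct FFT-based bispectrum estimator of the kind the method relies on can be sketched as follows; the segment length, windowing, and restriction to the auto-bispectrum of a single channel are illustrative simplifications of the 4-channel auto- and cross-bispectra used in the paper.

```python
import numpy as np

def bispectrum(x, nfft=256):
    """Segment-averaged auto-bispectrum estimate.

    For each windowed segment with spectrum X,
    B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))] is accumulated and
    averaged; amplitude ratios and phase angles of such estimates carry
    the attenuation and delay information exploited by the method.
    """
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    f = np.arange(nfft // 2)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(nfft))
        B += X[f][:, None] * X[f][None, :] * np.conj(X[f[:, None] + f[None, :]])
    return B / len(segs)
```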

Keywords: 4-point detection, a stationary random acoustic source, auto- and cross-bispectra, estimation of 3D-position

Procedia PDF Downloads 354
18177 Mine Project Evaluations in the Rising of Uncertainty: Real Options Analysis

Authors: I. Inthanongsone, C. Drebenstedt, J. C. Bongaerts, P. Sontamino

Abstract:

The major concern in evaluating the value of mining projects relates to the deficiency of the traditional discounted cash flow (DCF) method. This method does not take uncertainties into account and hence does not allow for an economic assessment of managerial flexibility and operational adaptability, which are increasingly determining long-term corporate success. Such an assessment can be performed with the real options valuation (ROV) approach, since it allows for a comparative evaluation of unforeseen uncertainties over a project life cycle. This paper presents an economic evaluation model for open-pit mining projects based on the real options valuation approach. Uncertainties in the model are caused by metal price and cost uncertainties, and the system dynamics (SD) modeling method is used to structure and solve the real options model. The model is applied to a case study. It can be shown that managerial flexibility reacting to uncertainties may create additional value for a mining project in comparison with the outcomes of the DCF method. One important insight for management dealing with uncertainty lies in choosing the optimal time to exercise strategic options.
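
To make the flexibility argument concrete, the sketch below values the option to defer an investment on a binomial (CRR) lattice; this is a generic stand-in for the paper's system-dynamics ROV model, with made-up parameters.

```python
import numpy as np

def defer_option_value(V0, I, r, sigma, T, n=200):
    """Value of the option to defer an investment (CRR binomial lattice).

    V0: present value of the project's cash flows; I: investment cost;
    r: risk-free rate; sigma: volatility of V (e.g. driven by metal
    prices). Static NPV ignores flexibility; the lattice values the
    American-style right to wait before investing.
    """
    dt = T / n
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral probability
    V = V0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    C = np.maximum(V - I, 0.0)                  # exercise value at maturity
    for _ in range(n):
        C = np.exp(-r * dt) * (p * C[:-1] + (1 - p) * C[1:])
        V = V[:-1] / u                          # project values one step back
        C = np.maximum(C, V - I)                # invest now if that is better
    return C[0]

# Flexibility premium over static NPV (illustrative numbers):
print(defer_option_value(100, 105, 0.05, 0.3, T=5), "vs static NPV", 100 - 105)
```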

Keywords: DCF methods, ROV approach, system dynamics modeling methods, uncertainty

Procedia PDF Downloads 495
18176 Optimization of Process Parameters by Using Taguchi Method for Bainitic Steel Machining

Authors: Vinay Patil, Swapnil Kekade, Ashish Supare, Vinayak Pawar, Shital Jadhav, Rajkumar Singh

Abstract:

In recent years, bainitic steel has been used in automotive and non-automotive sectors due to its high strength. Bainitic steel is difficult to machine because of its high hardness; hence, in this paper, the machinability of bainitic steel is studied using the Taguchi design of experiments (DOE) approach. Conventional turning experiments were conducted using an L16 orthogonal array for three input parameters, viz. cutting speed, depth of cut, and feed. The Taguchi method is applied to study the performance characteristics of the machining parameters with respect to surface roughness (Ra), cutting force, and tool wear rate. Using Taguchi analysis, the optimized process parameters for the best surface finish and minimum cutting forces were identified.
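
The S/N-ratio step of the Taguchi analysis is simple to state; below is a smaller-the-better sketch (appropriate for Ra, cutting force, and tool wear) with a hypothetical factor column and random response data standing in for the experimental measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better S/N ratio: -10 log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Main effect of one factor: average S/N per level over the 16 runs.
levels = np.repeat([1, 2, 3, 4], 4)        # one column of a 4-level L16 array
responses = np.random.rand(16, 3) + 1.0    # placeholder replicate measurements
sn = np.array([sn_smaller_is_better(run) for run in responses])
effects = {lv: sn[levels == lv].mean() for lv in np.unique(levels)}
print(max(effects, key=effects.get))       # best level = highest mean S/N
```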

Keywords: conventional turning, Taguchi method, S/N ratio, bainitic steel machining

Procedia PDF Downloads 327
18175 Validity of Clinical Disease Activity Index (CDAI) to Evaluate the Disease Activity of Rheumatoid Arthritis Patients in Sri Lanka: A Prospective Follow up Study Based on Newly Diagnosed Patients

Authors: Keerthie Dissanayake, Chandrika Jayasinghe, Priyani Wanigasekara, Jayampathy Dissanayake, Ajith Sominanda

Abstract:

The routine use of the Disease Activity Score-28 (DAS28) to assess disease activity in rheumatoid arthritis (RA) is limited by its dependency on laboratory investigations and the complex calculations involved. In contrast, the Clinical Disease Activity Index (CDAI) is simple to calculate, which makes the "treat to target" strategy for the management of RA more practical. We aimed to assess the validity of the CDAI compared to the DAS28 in RA patients in Sri Lanka. A total of 103 newly diagnosed RA patients were recruited, and their disease activity was calculated using the DAS28 and CDAI during the first visit to the clinic (0 months) and re-assessed at 4 and 9 months of the follow-up visits. The validity of the CDAI, compared to the DAS28, was evaluated. Patients had a female preponderance (6:1) and a short symptom duration (mean = 6.33 months). The construct validity of the CDAI, as assessed by Cronbach's α test, was 0.868. Convergent validity was assessed by correlation and kappa statistics. Strong positive correlations were observed between the CDAI and DAS28 at the baseline (0 months) and at 4 and 9 months of evaluation (Spearman's r = 0.9357, 0.9354, and 0.9106, respectively). Moderate-to-good inter-rater agreement between the DAS28 and CDAI was observed (weighted kappa of 0.660, 0.519, and 0.741 at 0, 4, and 9 months, respectively). Discriminant validity, as assessed by ROC curves at 0, 4, and 9 months of evaluation, showed areas under the curve (AUC) of 0.958, 0.985, and 0.914, respectively. The suggested cut-off points for the CDAI disease activity categories according to the ROC curves were ≤ 2 (remission), >2 to ≤ 5 (low), >5 to ≤ 18 (moderate), and > 18 (high). These findings indicate that the CDAI has good concordance with the DAS28 in assessing disease activity in RA patients in this study sample.
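
For reference, the CDAI itself needs no laboratory input and is a plain sum; the sketch below also encodes the study's proposed cut-offs alongside the conventional ones.

```python
def cdai(tjc28, sjc28, patient_global, evaluator_global):
    """Clinical Disease Activity Index: 28-joint tender count + 28-joint
    swollen count + patient and evaluator global assessments (0-10 scales).
    Its lab-free definition is what makes it practical for routine use."""
    return tjc28 + sjc28 + patient_global + evaluator_global

def category(score, cuts=(2.0, 5.0, 18.0)):
    """Classify with the study's proposed cut-offs (<=2 remission, >2-5 low,
    >5-18 moderate, >18 high); the conventional cut-offs are 2.8, 10, 22."""
    labels = ("remission", "low", "moderate", "high")
    return labels[sum(score > c for c in cuts)]

print(category(cdai(tjc28=4, sjc28=2, patient_global=3.5,
                    evaluator_global=3.0)))  # -> moderate
```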

Keywords: rheumatoid arthritis, CDAI, disease activity, Sri Lanka, validation

Procedia PDF Downloads 147
18174 Airway Resistance Evaluation by Respiratory Inductive Plethysmography in Subjects with Airway Obstructions

Authors: Aicha Laouani, Sonia Rouatbi, Saad Saguem, Gila Benchetrit, Pascale Calabrese

Abstract:

A new approach based on respiratory inductive plethysmography (RIP) signal analysis has been used to evaluate bronchoconstriction changes in 50 healthy controls and in 44 adults with moderate bronchial obstruction treated with a bronchodilatation protocol. Thoracic and abdominal motions were recorded (5 min) by RIP. For each recording, the thoracoabdominal signals were analysed and a mean distance (D) was calculated. Airway resistance (Raw) and spirometric data were measured with a body plethysmograph. The results showed that both D and Raw were higher in the obstructive subjects than in the healthy group. Significant decreases in D and Raw were also observed after bronchodilatation in the obstructive group. There was also a positive and significant correlation between D and Raw in the subjects before and after bronchodilatation. The distance D calculated from RIP signals could be used as a non-invasive tool for continuous monitoring of bronchoconstriction changes.

Keywords: airway resistance, bronchoconstriction, thorax, respiratory inductive plethysmography

Procedia PDF Downloads 333
18173 A Novel Model for Saturation Velocity Region of Graphene Nanoribbon Transistor

Authors: Mohsen Khaledian, Razali Ismail, Mehdi Saeidmanesh, Mahdiar Hosseinghadiry

Abstract:

A semi-analytical model for the impact ionization coefficient of the graphene nanoribbon (GNR) is presented. The model is derived by calculating the probability of electrons reaching the ionization threshold energy Et and the distance traveled by an electron gaining Et. In addition, the ionization threshold energy is semi-analytically modeled for the GNR. We justify our assumptions using analytic modeling and comparison with simulation results. A Gaussian simulator together with analytical modeling is used to calculate the ionization threshold energy, and kinetic Monte Carlo is employed to calculate the ionization coefficient and verify the analytical results. Finally, the ionization profile is presented using the proposed models and simulation, and the results are compared with those of silicon.

Keywords: nanostructures, electronic transport, semiconductor modeling, systems engineering

Procedia PDF Downloads 469
18172 Attachment Systems and Psychotherapy: An Internal Secure Caregiver to Heal and Protect the Parts of Our Clients: InCorporer Method

Authors: Julien Baillet

Abstract:

In light of 30 years of scientific research, the InCorporer method was created in 2019 as a new approach to healing traumatic, developmental, and dissociative injuries. Following natural nervous system functions, InCorporer aims to heal, develop, and update the different defensive mammalian subsystems: fight, flight, freeze, feign death, cry for help, and energy regulator. The dimensions taken into account are: (i) heal the traumatic injuries that are still bleeding; (ii) develop the systems that never received the security, attention, and affection they needed; (iii) update the parts that stayed stuck in the past, unaware for too long that they are now out of danger. Through the Present Part and its caregiving skills, the InCorporer method enables a balanced, soothed, and collaborative personality system. To be as integrative as possible, the InCorporer method has been designed according to several fields of research, such as structural dissociation theory, attachment theory, and information processing theory. In this paper, the author presents how the internal caregiver is developed and trained to heal the different parts/subsystems of clients through mindful attention and reflex movement integration.

Keywords: PTSD, attachment, dissociation, part work

Procedia PDF Downloads 68
18171 Efficient Method for Inducing Embryos from Isolated Microspores of Durum Wheat

Authors: Zelikha Labbani

Abstract:

Durum wheat represents an attractive species for studying androgenesis via isolated microspore culture, with the aim of increasing androgenic yield in recalcitrant species, particularly at the stage of embryogenesis induction. We describe here an efficient method for inducing embryos from isolated microspores of durum wheat. It is shown that this method, combined with pretreatment of the spikes (kept within their sheath leaves) by cold alone, cold plus mannitol, or mannitol alone for different durations, has significant positive effects on embryo production. The aim of this study was, therefore, to test the effect of 0.3 M mannitol and cold pretreatments on the quality and quantity of embryos produced from microspore culture of wheat cultivars.

Keywords: in vitro embryogenesis, isolated microspore culture, durum wheat, pretreatments, 0.3 M mannitol, cold pretreatment

Procedia PDF Downloads 52
18170 Tractography Analysis of the Evolutionary Origin of Schizophrenia

Authors: Asmaa Tahiri, Mouktafi Amine

Abstract:

A substantial body of traditional medical research has been devoted to managing and treating mental disorders. At the present time, to the best of our knowledge, the fundamental understanding of the underlying causes of most psychological disorders still needs to be explored further to inform early diagnosis, symptom management, and treatment. The emerging field of evolutionary psychology is a promising prospect for addressing the origin of mental disorders, potentially leading to more effective treatments. Schizophrenia, as a topical mental disorder, has been linked to the evolutionary adaptation of the human brain, reflected in the brain connectivity and asymmetry directly linked to humans' higher cognition; other primates, in contrast, are our closest living representation of the structure and connectivity of our earliest common African ancestors. As proposed in the evolutionary psychology literature, the pathophysiology of schizophrenia is expressed in, and directly linked to, altered connectivity between the Hippocampal Formation (HF) and the Dorsolateral Prefrontal Cortex (DLPFC). This paper presents the results of tractography analysis applied to multiple open-access Diffusion Weighted Imaging (DWI) datasets of healthy subjects, subjects affected by schizophrenia, and primates, to illustrate the relevance of the connectivity of the aforementioned brain regions and the underlying evolutionary changes in the human brain. Deterministic fiber tracking and streamline analysis were used to generate connectivity matrices from the DWI datasets, which were overlaid to compute distances and highlight disconnectivity patterns, in conjunction with other fiber-tracking metrics: Fractional Anisotropy (FA), Mean Diffusivity (MD), and Radial Diffusivity (RD).
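
The three cited fiber-tracking metrics have standard closed forms in terms of the diffusion-tensor eigenvalues, sketched here for reference:

```python
import numpy as np

def dti_metrics(evals):
    """Fractional anisotropy, mean and radial diffusivity from the three
    eigenvalues of a diffusion tensor (standard DTI definitions)."""
    l1, l2, l3 = np.sort(evals)[::-1]
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    rd = (l2 + l3) / 2.0
    return fa, md, rd

print(dti_metrics([1.7e-3, 0.3e-3, 0.2e-3]))  # typical white-matter tensor
```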

Keywords: tractography, evolutionary psychology, schizophrenia, brain connectivity

Procedia PDF Downloads 62
18169 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies

Authors: Jaballah Jamil

Abstract:

KLD (Peter Kinder, Steve Lydenberg, and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firms, which suggests that KLD ratings may not include private information. Moreover, KLD's failure to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates the returns of a number of equally weighted portfolios, excess stock returns, and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that, over the last two decades, KLD rating changes have significantly and negatively influenced the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in a firm's corporate social responsibility rating lowers its risk. Second, to assess the effect of the Enron scandal on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal eliminated KLD's effect on socially responsible investors. Therefore, our findings may call into question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.

Keywords: KLD social rating agency, investors' perception, investment decision, financial performance

Procedia PDF Downloads 433
18168 Rationalized Haar Transforms Approach to Design of Observer for Control Systems with Unknown Inputs

Authors: Joon-Hoon Park

Abstract:

The fundamental concept of observability is important from both theoretical and practical points of view in modern control systems. In modern control theory, a control system has criteria for determining whether a design solution exists for the given system parameters and design objectives. The idea of observability relates to the condition of observing or estimating the state variables from the output variables, which are generally measurable. To design a closed-loop control system, the practical problems of implementing feedback of the state variables must be considered, and the problem of implementing state feedback control arises in this case. Not all of the state variables are available, so it is necessary to design and implement an observer that estimates the state variables from the output parameters. In practical cases, however, unknown inputs are sometimes present in control systems. This paper presents a design method and algorithm for an observer of a control system with unknown input parameters based on the rationalized Haar transform. The proposed method is more advantageous than other numerical methods.

Keywords: orthogonal functions, rationalized Haar transforms, control system observer, algebraic method

Procedia PDF Downloads 363
18167 Valuing Social Sustainability in Agriculture: An Approach Based on Social Outputs’ Shadow Prices

Authors: Amer Ait Sidhoum

Abstract:

Interest in sustainability has gained ground among practitioners, academics, and policy-makers due to growing stakeholder awareness of environmental and social concerns. This is particularly true for agriculture. However, relatively little research has been conducted on the quantification of social sustainability and the contribution of social issues to agricultural production efficiency. The main objective of this research is to propose a method for evaluating the prices of social outputs, more precisely their shadow prices, by allowing for the stochastic nature of agricultural production, that is to say, for production uncertainty. In this article, the assessment of social outputs' shadow prices is conducted within the methodological framework of nonparametric Data Envelopment Analysis (DEA). An output-oriented directional distance function (DDF) is implemented to represent the technology of a sample of Catalan arable crop farms and derive efficiency scores. The overall production technology of our sample is assumed to be the intersection of two different sub-technologies. The first sub-technology models the production of random desirable agricultural outputs, while the second sub-technology reflects the social outcomes of agricultural activities. Once a nonparametric production technology has been represented, the DDF primal approach can be used for efficiency measurement, while shadow prices are drawn from the dual representation of the DDF. Computing shadow prices is a way to assign an economic value to non-marketed social outcomes. Our research uses cross-sectional, farm-level data collected in 2015 from a sample of 180 Catalan arable crop farms specialized in the production of cereal, oilseed, and protein (COP) crops. Our results suggest that the sample farms show high performance scores, from 85% for the bad state of nature to 88% for the normal and ideal crop-growing conditions. This suggests that farm performance increases as crop growth conditions improve. Results also show that the average shadow prices of the desirable state-contingent output and of social outcomes are positive for both efficient and inefficient farms, suggesting that the production of desirable marketable outputs and of non-marketable outputs makes a positive contribution to farm production efficiency. Results also indicate that social outputs' shadow prices are contingent upon the growing conditions, following an upward trend as crop-growing conditions improve. This finding suggests that efficient farms prefer to allocate more resources to the production of desirable outputs than to social outcomes. To our knowledge, this study represents the first attempt to compute shadow prices of social outcomes while accounting for the stochastic nature of the production technology. Our findings suggest that the decision-making processes of efficient farms in dealing with social issues are stochastic and strongly dependent on growth conditions. This implies that policy-makers should adjust their instruments according to the stochastic environmental conditions. An optimal redistribution of rural development support, increasing public payments as crop growth conditions improve, would likely enhance the effectiveness of public policies.

Keywords: data envelopment analysis, shadow prices, social sustainability, sustainable farming

Procedia PDF Downloads 117
18166 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost compared with the standard interior-point method. The computational burden is further reduced by a proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method achieves exact detection of outliers when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of the outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Use of the recovered "clean" data from the proposed method can give much better parameter estimates than those based on the raw data.
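
As a generic illustration of the low-rank-plus-sparse idea (not the paper's structured fast algorithm), a standard principal-component-pursuit solver via inexact ALM looks like this:

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Low-rank + sparse decomposition M = L + S by inexact ALM/ADMM.

    Default lam and mu follow the common principal-component-pursuit
    choices; spikes isolated in S flag candidate outliers. This is a
    generic stand-in for the paper's SDP formulation.
    """
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    S, Y = np.zeros((m, n)), np.zeros((m, n))
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt     # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)     # soft-threshold the residual
        R = M - L - S
        Y += mu * R                              # dual update
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```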

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 303
18165 Energy Efficient Alternate Hydraulic System Called TejHydroLift

Authors: Tejinder Singh

Abstract:

This paper describes a new, more efficient hydraulic system that uses less work to produce more output. Conventional hydraulic systems, such as hydraulic lifts and rams, require large amounts of water to be pumped to produce output. The TejHydroLift exerts an equal force with a smaller input of water. The paper shows that the applied force can be increased manifold without having to move a smaller force through a larger distance, as is required in conventional hydraulic lifts. The paper describes one of the configurations of the TejHydroLift system, called the "Slim Antenna TejHydroLift" configuration. The TejHydroLift uses less water and hence demands less work to move the same load.

Keywords: alternate, hydraulic system, efficient, TejHydroLift

Procedia PDF Downloads 256
18164 Characteristic Study on Conventional and Soliton Based Transmission System

Authors: Bhupeshwaran Mani, S. Radha, A. Jawahar, A. Sivasubramanian

Abstract:

Here, we study the characteristic features of conventional (on-off keying) and soliton-based transmission systems. We consider a 20 Gbps transmission system implemented with conventional single-mode fiber (C-SMF) to examine the role of the Gaussian pulse, which is characteristic of conventional propagation, and the hyperbolic-secant pulse, which is characteristic of soliton propagation. We note the influence of these pulses with respect to different dispersion lengths and soliton periods in the conventional and soliton systems, respectively, and evaluate the system performance in terms of the quality factor. From the analysis, we show that the soliton pulse gives more consistent performance, even over long distances without dispersion compensation, than the conventional system, as it is robust to dispersion. For a transmission length of 200 km, the soliton system yielded a Q of 33.958, while the conventional system was completely exhausted, with Q = 0.
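
The quality factor quoted here is the usual eye-diagram statistic; a sketch of its computation from received samples (synthetic data below, in place of simulated receiver output):

```python
import numpy as np

def q_factor(ones, zeros):
    """Eye-diagram quality factor Q = (mu1 - mu0) / (sigma1 + sigma0),
    computed from the sampled levels of '1' and '0' bits at the decision
    instant."""
    mu1, mu0 = np.mean(ones), np.mean(zeros)
    s1, s0 = np.std(ones), np.std(zeros)
    return (mu1 - mu0) / (s1 + s0)

rng = np.random.default_rng(0)
print(q_factor(rng.normal(1.0, 0.05, 1000),
               rng.normal(0.0, 0.05, 1000)))  # -> approximately 10
```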

Keywords: dispersion length, return-to-zero (RZ), soliton, soliton period, Q-factor

Procedia PDF Downloads 339
18163 Estimating the Mean Parameter of the Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares the estimation of the mean parameter of the normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the sample mean of the data; the Bayes estimator is derived from the prior distribution; and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the parameter is estimated by each method, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator is computed from a prior distribution with mean 1 and variance 12, and shows a significant difference in the mean with variance 9 at sample sizes 10 and 20.
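
All three estimators are short to state for this conjugate setting; the Gibbs sampler below also treats the variance as unknown, with inverse-gamma hyperparameters that are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 4.0, size=20)        # true mean 2, variance 16, n = 20

# ML estimator: the sample mean.
ml = x.mean()

# Bayes estimator under a N(m0, v0) prior with known data variance s2
# (prior mean 1 and variance 12, as in the abstract's setting).
m0, v0, s2, n = 1.0, 12.0, 16.0, len(x)
bayes = (m0 / v0 + n * x.mean() / s2) / (1 / v0 + n / s2)

# MCMC: Gibbs sampling for (mu, s2) with the variance also unknown, using
# conjugate normal and inverse-gamma full conditionals.
a0, b0 = 2.0, 2.0                        # assumed inverse-gamma hyperparameters
mu, s2_g, draws = 0.0, 1.0, []
for _ in range(5000):
    v_post = 1 / (1 / v0 + n / s2_g)
    mu = rng.normal(v_post * (m0 / v0 + n * x.mean() / s2_g), np.sqrt(v_post))
    s2_g = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + 0.5 * ((x - mu) ** 2).sum()))
    draws.append(mu)
mcmc = np.mean(draws[1000:])             # posterior mean after burn-in
print(ml, bayes, mcmc)
```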

Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution

Procedia PDF Downloads 352
18162 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically; most such methods are developed based on the analysis of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Unsymmetrical ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. The thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image with the structure coordinates. Averaging the ice diameters from different camera views yields the ice thicknesses of the structural elements. A comparison between the ice load measurements obtained with this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
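
The thresholding-plus-geometry step can be sketched on a synthetic image; the threshold, pixel scale, and bar width below are hypothetical stand-ins for the values obtained from camera calibration.

```python
import numpy as np

def ice_thickness(gray, bar_px, axis=1, threshold=0.6, mm_per_px=0.5):
    """Estimate the one-sided ice layer on a bar from a binarized image.

    gray: grayscale image scaled to [0, 1] in which bright pixels are ice;
    bar_px: bare-bar diameter in pixels, known from the structure geometry.
    In the study, the scale and coordinates follow from the camera's
    intrinsic and extrinsic parameters; here they are assumed.
    """
    binary = gray > threshold                    # thresholding step
    widths = binary.sum(axis=axis)               # iced width per image row
    iced_px = widths[widths > 0].mean()
    return max(iced_px - bar_px, 0) * mm_per_px / 2.0

img = np.zeros((40, 40)); img[:, 15:25] = 1.0    # synthetic 10 px iced bar
print(ice_thickness(img, bar_px=6))              # -> (10 - 6) * 0.5 / 2 = 1.0 mm
```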

Keywords: camera calibration, ice detection, ice load measurements, image processing

Procedia PDF Downloads 364