Search results for: threshold method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19479

18039 Anti-Scale Magnetic Method as a Prevention Method for Calcium Carbonate Scaling

Authors: Maha Salman, Gada Al-Nuwaibit

Abstract:

Many researchers have confirmed that the anti-scale magnetic method (AMM) retards scale deposition by producing a new crystal morphology, one that tends to remain suspended rather than precipitate. AMM is considered an economical method compared to other common scale-prevention methods used in desalination plants, such as acid treatment and the addition of antiscalants. The current project was initiated to evaluate the effectiveness of AMM in preventing calcium carbonate scaling. The AMM was tested at different flow velocities (1.0, 0.5, 0.3, 0.1, and 0.003 m/s), different operating temperatures (50, 70, and 90°C), different feed pH values, and different magnetic field strengths. The results showed that AMM was effective in retarding calcium carbonate scale deposition and that its performance depends strongly on the flow velocity. The scale retention time was found to be affected by the operating temperature, flow velocity, and magnetic strength (MS); in general, as the operating temperature increased, the effectiveness of the AMM in retarding calcium carbonate (CaCO₃) scaling increased.

Keywords: magnetic treatment, field strength, flow velocity, magnetic scale retention time

Procedia PDF Downloads 377
18038 Land Subsidence Monitoring in Semarang and Demak Coastal Area Using Persistent Scatterer Interferometric Synthetic Aperture Radar

Authors: Reyhan Azeriansyah, Yudo Prasetyo, Bambang Darmo Yuwono

Abstract:

Land subsidence is one of the problems that occur in the coastal areas of Java Island, including the Semarang and Demak areas in the northern region of Central Java. Sea erosion, rising sea levels, vulnerable soil structure, and economic development activities mean that land subsidence occurs frequently in both areas. Quantifying how much subsidence has occurred in the region requires monitoring by remote sensing methods such as PS-InSAR. PS-InSAR is a remote sensing technique, developed from the DInSAR method, that monitors ground-surface movement and allows users to perform regular measurements and monitoring of fixed objects on the Earth's surface. PS-InSAR processing is done using the Stanford Method of Persistent Scatterers (StaMPS). Like other recent analysis techniques, Persistent Scatterer (PS) InSAR addresses both the decorrelation and atmospheric problems of conventional InSAR. StaMPS identifies and extracts the deformation signal even in the absence of bright scatterers, and it is applicable in areas undergoing non-steady deformation, with no prior knowledge of the variations in deformation rate. In addition, the method can cover a large area, so the subsidence estimates can span the entire coastal zone of Semarang and Demak. The PS-InSAR method yields the yearly subsidence impact on the Semarang and Demak region, and the PS-InSAR results will be compared with GPS monitoring data to determine the difference in subsidence measured by the two methods. It is hoped that the PS-InSAR method can be used to monitor land subsidence, that it can assist other survey methods such as GPS surveys, and that the results can inform policy for the affected coastal areas of Semarang and Demak.
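
A StaMPS-style analysis commonly begins by selecting persistent-scatterer candidates with the amplitude dispersion index D_A = σ_A / μ_A computed over the stack of co-registered SAR amplitude images, keeping pixels below a cutoff of about 0.4. A minimal NumPy sketch of that selection step (the cutoff, array shapes, and synthetic data are illustrative assumptions, not values from this abstract):

```python
import numpy as np

def select_ps_candidates(amplitude_stack, threshold=0.4):
    """Select persistent-scatterer (PS) candidates from a stack of
    co-registered SAR amplitude images.

    amplitude_stack : ndarray of shape (n_images, rows, cols)
    threshold       : amplitude dispersion cutoff (~0.4 in StaMPS practice)
    """
    mean_amp = amplitude_stack.mean(axis=0)
    std_amp = amplitude_stack.std(axis=0)
    # Amplitude dispersion index D_A = sigma_A / mu_A; low values indicate
    # phase-stable pixels suitable for PS-InSAR time-series analysis.
    dispersion = np.where(mean_amp > 0, std_amp / mean_amp, np.inf)
    return dispersion < threshold

# Toy example: 30 acquisitions of a 100 x 100 scene of speckle,
# with a small patch of stable bright targets that should be selected.
stack = np.random.rayleigh(scale=1.0, size=(30, 100, 100))
stack[:, 50:55, 50:55] = 5.0 + 0.1 * np.random.standard_normal((30, 5, 5))
mask = select_ps_candidates(stack)
print(f"{mask.sum()} PS candidates out of {mask.size} pixels")
```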

Keywords: coastal area, Demak, land subsidence, PS-InSAR, Semarang, StaMPS

Procedia PDF Downloads 266
18037 Rating and Generating Sudoku Puzzles Based on Constraint Satisfaction Problems

Authors: Bahare Fatemi, Seyed Mehran Kazemi, Nazanin Mehrasa

Abstract:

Sudoku is a logic-based combinatorial puzzle that people of all ages enjoy playing. The challenging and addictive nature of this game has made it ubiquitous: most magazines, newspapers, puzzle books, etc. publish many Sudoku puzzles every day. These puzzles often come in different levels of difficulty so that everyone, from beginner to expert, can play and enjoy the game. Generating puzzles with different levels of difficulty is a major concern of Sudoku designers. Several works in the literature propose ways of generating puzzles with a desired level of difficulty. In this paper, we propose a method based on constraint satisfaction problems to evaluate the difficulty of Sudoku puzzles. We then propose a hill climbing method to generate puzzles with different levels of difficulty. Whereas other methods are usually capable of generating puzzles with only a few difficulty levels, our method can generate puzzles with an arbitrary number of difficulty levels. We test our method by generating puzzles of varying difficulty and having a group of 15 people solve all of them while we record the time spent on each puzzle.
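
The generation loop described here can be pictured as hill climbing over candidate puzzles, scored by how close a difficulty estimate is to a target. The sketch below is schematic: `solver_difficulty` stands in for the paper's CSP-based rating, and `neighbor` (a small random modification preserving unique solvability) is an illustrative assumption.

```python
import random

def hill_climb_puzzle(start_puzzle, target_difficulty, solver_difficulty,
                      neighbor, max_iters=1000):
    """Hill climbing toward a target difficulty rating.

    solver_difficulty(puzzle) -> float : difficulty estimate (e.g. CSP-based)
    neighbor(puzzle) -> puzzle         : small random modification that
                                         keeps the puzzle uniquely solvable
    """
    current = start_puzzle
    current_err = abs(solver_difficulty(current) - target_difficulty)
    for _ in range(max_iters):
        cand = neighbor(current)
        cand_err = abs(solver_difficulty(cand) - target_difficulty)
        if cand_err <= current_err:          # accept moves that get closer
            current, current_err = cand, cand_err
        if current_err == 0:
            break
    return current

# Toy demo: "puzzles" are integers, "difficulty" is the value itself,
# and a neighbor perturbs by +/- 1; the loop climbs from 0 toward 7.
puzzle = hill_climb_puzzle(0, 7, solver_difficulty=lambda p: p,
                           neighbor=lambda p: p + random.choice([-1, 1]))
print(puzzle)  # -> 7
```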

Keywords: constraint satisfaction problem, generating Sudoku puzzles, hill climbing

Procedia PDF Downloads 402
18036 Mental Well-Being and Quality of Life: A Comparative Study of Male Leather Tannery and Non-Tannery Workers of Kanpur City, India

Authors: Gyan Kashyap, Shri Kant Singh

Abstract:

Improved mental health can be articulated as good physical health and quality of life, and mental health plays an important role in everyone's life. Today, people live with stress caused by personal matters, health problems, unemployment, the work environment, the living environment, substance use, lifestyle, and many other factors. Many studies have confirmed that the proportion of people with mental health problems in India is increasing. This study focuses on the mental well-being of male leather tannery workers in Kanpur city, India, for whom both the work environment and the living environment are important health risk factors. Tannery workers are highly susceptible to chemical and physical hazards because of their exposure to hazardous materials and processes during tanning work in a very hazardous work environment. The aim of this study is to determine the level of mental health disorder and the quality of life among male leather tannery and non-tannery workers in Kanpur city, India. The study uses primary data from a cross-sectional household survey conducted from January to June 2015 on tannery and non-tannery workers, as part of a PhD program, in the Jajmau area of Kanpur city, India. A sample of 286 tannery and 295 non-tannery workers was collected from the study area, covering workers aged 15-70 who had been working for at least one year at the time of the survey. The study used the General Health Questionnaire (GHQ-12) and a work-related stress scale to assess the mental well-being of male tannery and non-tannery workers; polychoric factor analysis was applied to these instruments to obtain the best thresholds and scoring. Additional items included a rating of overall quality of life on a Likert scale ('How would you rate your overall quality of life?') as well as questions on earnings, education, family size, living conditions, household assets, media exposure, health expenditure, treatment-seeking behavior, and food habits. Results from the study revealed that around one third of tannery workers had severe mental health problems, more than non-tannery workers. Mental health problems showed a statistically significant association with wealth quintile: 56 percent of tannery workers in the medium wealth quintile had severe mental health problems, and 42 percent of tannery workers in the low wealth quintile had moderate mental health problems. The work-related stress scale gave statistically significant results for tannery workers. A large proportion of tannery and non-tannery workers reported that they were unable to meet their basic needs from their earnings and were living in very poor conditions. Notably, 58 percent of tannery workers involved in beam-house work had severe mental health problems. The study found a statistically significant association between tannery work and mental health problems among tannery workers.
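
For orientation, GHQ-12 responses on a four-point scale are conventionally collapsed with binary 0-0-1-1 scoring and summed, and totals at or above a case threshold flag probable distress; the study itself derives thresholds via polychoric factor analysis, which is not reproduced here. A minimal sketch of the conventional scoring (the case threshold of 3 is a commonly used convention, not a value from this study):

```python
def ghq12_score(responses, case_threshold=3):
    """Conventional binary (0-0-1-1) scoring of the 12-item GHQ.

    responses : list of 12 integers in {0, 1, 2, 3}
    Returns (total score, probable-case flag).
    """
    assert len(responses) == 12
    # Each item contributes 1 if the response is in the two "worse" categories.
    total = sum(1 if r >= 2 else 0 for r in responses)
    return total, total >= case_threshold

score, is_case = ghq12_score([0, 1, 2, 3, 1, 2, 0, 0, 3, 2, 1, 1])
print(score, is_case)   # -> 5 True
```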

Keywords: GHQ-12, mental well-being, factor analysis, quality of life, tannery workers

Procedia PDF Downloads 387
18035 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In research on automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to query both text databases and the knowledge graph simultaneously, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method performs very well in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only shows significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
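
The expanded retrieval module can be pictured as querying a text index and a knowledge graph in parallel and merging both kinds of evidence before generation. The sketch below is purely schematic: `vector_search`, `kg_subgraph`, `gnn_encode`, and `llm_generate` are hypothetical stand-ins for the paper's components, not a described API.

```python
def answer_question(question, vector_search, kg_subgraph, gnn_encode,
                    llm_generate, k=5):
    """One hybrid RAG step: retrieve from both a text index and a knowledge
    graph, encode the retrieved subgraph with a GNN, and condition the
    generator on the combined context."""
    text_passages = vector_search(question, top_k=k)   # unstructured evidence
    subgraph = kg_subgraph(question)                   # structured evidence
    graph_context = gnn_encode(subgraph)               # GNN embedding of entities/relations
    context = {"passages": text_passages, "graph": graph_context}
    return llm_generate(question, context)
```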

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 41
18034 Top Skills That Build Cultures at Organizations

Authors: Priyanka Botny Srinath, Alessandro Suglia, Mel McKendrick

Abstract:

Background: Organizational culture studies integrate sociology and anthropology, portraying humans as creators of symbols, languages, beliefs, and ideologies; essentially, creators and managers of meaning. In our research, we leverage analytical measures to discern whether an organization embodies a singular culture or a myriad of subcultures. Fast-forward to 2023: our research thesis focuses on digitally measuring culture, coining the term "Work Culture Quotient." This entails conceptually mapping common experiential patterns to provide executives with insights into the digital organization journey, aiding them in understanding their current position and identifying future steps. Objectives: finding the new-age skills that help define culture; understanding the implications of post-COVID effects; deriving a digital framework for measuring skill sets. Method: We conducted two comprehensive Delphi studies to distill essential insights. Delphi 1: Through a thematic analysis of interviews with 20 high-level leaders representing companies across diverse regions (India, Japan, the US, Canada, Morocco, and Uganda), we identified 20 key skills critical for cultivating a robust organizational culture. The skills are: influence, self-confidence, optimism, empathy, leadership, collaboration and cooperation, developing others, commitment, innovativeness, leveraging diversity, change management, team capabilities, self-control, digital communication, emotional awareness, team bonding, communication, problem solving, adaptability, and trustworthiness. Delphi 2: Subject matter experts were asked to complete a questionnaire derived from the thematic analysis in stage 1 to formalise themes and draw consensus amongst experts on the most important workplace skills. Results: The thematic analysis identified 20 workplace employee skills, all of which were included in the Delphi round 2 questionnaire. From the outputs, we analysed the data using RStudio to arrive at agreement and consensus; we also used a sum-of-squares method to compare agreements and extract themes, with a threshold of 80% agreement. This yielded three themes at over 80% agreement (leadership, collaboration and cooperation, communication) and three further themes at over 60% agreement (commitment, empathy, trustworthiness); a worked consensus computation is sketched below. From this, we selected five questionnaires to be included in the primary data collection phase; these will be paired with digital footprints to provide a work culture quotient. Implications: The findings from these studies bear profound implications for decision-makers, revolutionizing their comprehension of organizational culture. Mapping the digital organization journey requires methodologies that probe not only external landscapes but also internal cultural dynamics. This holistic approach furnishes decision-makers with a nuanced understanding of their organizational culture and highlights pivotal skills for employee growth, enabling informed choices that resonate with the organization's unique cultural fabric. The anticipated outcomes go beyond individual cultural measurements, aligning with organizational goals to unveil a comprehensive view of culture, its artifacts, and its depth. Armed with this understanding, decision-makers gain tangible evidence for informed decision-making, strategically leveraging cultural strengths to cultivate an environment conducive to growth, innovation, and enduring success, ultimately leading to measurable outcomes.
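
The consensus filter from Delphi round 2 reduces to computing, per skill, the share of experts who endorsed it and keeping skills above the agreement cutoff. A minimal sketch (the ratings dictionary is illustrative, not the study's data):

```python
def consensus_themes(ratings, cutoff=0.80):
    """ratings: {skill: list of 0/1 expert endorsements}.
    Returns skills whose agreement meets the cutoff (e.g. 0.80 or 0.60)."""
    agreement = {s: sum(v) / len(v) for s, v in ratings.items()}
    return {s: a for s, a in agreement.items() if a >= cutoff}

ratings = {"leadership": [1] * 9 + [0],         # 90% agreement
           "communication": [1] * 8 + [0] * 2,  # 80% agreement
           "empathy": [1] * 7 + [0] * 3}        # 70% agreement
print(consensus_themes(ratings, 0.80))   # leadership, communication
print(consensus_themes(ratings, 0.60))   # all three skills
```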

Keywords: leadership, cooperation, collaboration, teamwork, work culture

Procedia PDF Downloads 47
18033 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the 'curse of dimensionality' intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method are wide-ranging: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types beyond credit risk.
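
The COS machinery itself, recovering a distribution from its characteristic function with a cosine expansion and reading quantiles off the recovered CDF, can be shown on a toy example. The sketch below recovers the CDF of a standard normal from its characteristic function and computes a 99% quantile (a Value-at-Risk); the truncation range and N are illustrative, and the paper's factor-copula portfolio-loss characteristic function is not reproduced here.

```python
import numpy as np

def cos_cdf(phi, a, b, N=256):
    """COS-method CDF recovered from a characteristic function phi.

    F(x) ~= F0/2 * (x - a) + sum_{k>=1} Fk * (b-a)/(k*pi) * sin(k*pi*(x-a)/(b-a)),
    with Fk = 2/(b-a) * Re[ phi(k*pi/(b-a)) * exp(-1j*k*pi*a/(b-a)) ].
    """
    k = np.arange(N)
    u = k * np.pi / (b - a)
    Fk = 2.0 / (b - a) * np.real(phi(u) * np.exp(-1j * u * a))

    def F(x):
        s = Fk[0] / 2.0 * (x - a)              # integrated k = 0 term
        s += np.sum(Fk[1:] * (b - a) / (k[1:] * np.pi)
                    * np.sin(k[1:] * np.pi * (x - a) / (b - a)))
        return s
    return F

phi_normal = lambda u: np.exp(-0.5 * u**2)     # standard normal char. function
F = cos_cdf(phi_normal, a=-10.0, b=10.0)

# 99% quantile by bisection on the recovered CDF (true value ~ 2.3263).
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0.99 else (lo, mid)
print(f"99% quantile ~= {0.5 * (lo + hi):.4f}")
```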

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 168
18032 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata

Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen

Abstract:

This article introduces a computationally efficient method for stacking sequence blending of composite structures; the efficiency makes the method especially interesting for structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. Many methods can be found in the literature that implement structural continuity by means of stacking sequence blending in one way or another; however, the complexity of the problem makes blending a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automaton scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance.

Keywords: composite, blending, optimization, lamination parameters

Procedia PDF Downloads 228
18031 A New Conjugate Gradient Method with Guaranteed Descent

Authors: B. Sellami, M. Belloufi

Abstract:

Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been studied extensively in recent years. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The family not only includes three already-existing practical nonlinear conjugate gradient methods but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the family. The numerical results show that the method is efficient for the given test problems. In addition, the methods related to this family are discussed in a unified manner.
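
The paper's specific family is not reproduced here; as an illustration of the idea, a well-known two-parameter family (due to Dai and Yuan) mixes the FR/PRP numerators and the FR/DY-HS denominators, recovering FR, PRP, DY, and HS at the corner values of (lam, mu). A minimal NumPy sketch using SciPy's Wolfe line search; the restart rule and the max(beta, 0) guard are common practical choices, assumed here rather than taken from the paper:

```python
import numpy as np
from scipy.optimize import line_search

def cg_two_param(f, grad, x0, lam=0.5, mu=0.5, tol=1e-6, max_iter=500):
    """Nonlinear CG with a two-parameter beta family (Dai-Yuan type):
    beta = ((1-lam)*||g+||^2 + lam*g+.y) / ((1-mu)*||g||^2 + mu*d.y),
    which gives FR, PRP, DY, and HS at the corners of (lam, mu).
    Step lengths satisfy the Wolfe conditions via SciPy's line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:                 # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = (((1 - lam) * (g_new @ g_new) + lam * (g_new @ y)) /
                ((1 - mu) * (g @ g) + mu * (d @ y)))
        d = -g_new + max(beta, 0.0) * d   # guard against ascent directions
        x, g = x_new, g_new
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(cg_two_param(rosen, rosen_grad, [-1.2, 1.0]))   # -> approx [1, 1]
```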

Keywords: unconstrained optimization, conjugate gradient method, line search, global convergence

Procedia PDF Downloads 452
18030 Periodic Topology and Size Optimization Design of Tower Crane Boom

Authors: Wu Qinglong, Zhou Qicai, Xiong Xiaolei, Zhang Richeng

Abstract:

In order to optimize the layout and size of the web members of a tower crane boom, a truss topology and cross-section size optimization method based on a continuum is proposed, considering three typical working conditions. First, the optimization model is established by replacing the web members with web plates, and the web plates are divided into several sub-domains so that the periodic soft kill option (SKO) method can be applied to the topology optimization of the slender boom. After obtaining the optimized topology of the web plates, the optimized layout of the web members is formed by extracting the principal stress distribution. Finally, using the web member radius as the design variable, the boom compliance as the objective, and the material volume of the boom as the constraint, the cross-section size optimization model is established. The size optimization criterion is deduced from the mathematical model by the Lagrange multiplier method and the Kuhn-Tucker condition. A comparison of the original boom with the optimized boom shows that this optimization method can effectively lighten the boom and improve its performance.

Keywords: tower crane boom, topology optimization, size optimization, periodic, SKO, optimization criterion

Procedia PDF Downloads 554
18029 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load and Resistance Factor Design Methods

Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih

Abstract:

The method selected for the design of structures affects not only their seismic vulnerability but also their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load and resistance factor design (LRFD). This study investigates the effect of these design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with a special moment-resisting frame and an eccentrically braced system was selected for this study. The building was designed for three intensities of peak ground acceleration (0.2 g, 0.25 g, and 0.3 g) using the ASD and LRFD methods. The required sizes of beams, columns, and braces were obtained using response spectrum analysis. The designed frames were then subjected to nine natural earthquake records scaled to the design response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and compared. Results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts compared with the ASD method. It was concluded that the LRFD method not only reduced the weights of structural elements but also provided a higher safety margin against seismic actions compared with the ASD method.
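
The two design philosophies differ in how nominal strength Rn is compared with demand: ASD checks Ra <= Rn/Omega against service loads, while LRFD checks Ru <= phi*Rn against factored loads. A minimal sketch for a tension member with AISC-style yielding factors (the loads and nominal strength are illustrative, not values from this study):

```python
def asd_ok(R_a, R_n, omega=1.67):
    """ASD check: service-load demand vs allowable strength Rn / Omega."""
    return R_a <= R_n / omega

def lrfd_ok(D, L, R_n, phi=0.90):
    """LRFD check: factored demand (1.2D + 1.6L) vs design strength phi * Rn."""
    R_u = 1.2 * D + 1.6 * L
    return R_u <= phi * R_n

# Illustrative tension member: dead load 40 kN, live load 60 kN,
# nominal yielding strength Rn = 180 kN (phi = 0.90, Omega = 1.67 per AISC).
D, L, R_n = 40.0, 60.0, 180.0
print(asd_ok(D + L, R_n))   # 100 <= 107.8 -> True
print(lrfd_ok(D, L, R_n))   # 144 <= 162.0 -> True
```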

Keywords: allowable stress design, load and resistance factor design, nonlinear time history analysis, seismic vulnerability, steel structures

Procedia PDF Downloads 270
18028 An Online Mastery Learning Method Based on a Dynamic Formative Evaluation

Authors: Jeongim Kang, Moon Hee Kim, Seong Baeg Kim

Abstract:

This paper proposes a novel e-learning model based on a dynamic formative evaluation. In the existing format of e-learning, the repetitive learning required to achieve mastery causes learners to lose focus and neglect their learning; the proposed dynamic formative evaluation is able to overcome this limitation of existing approaches. Since a repetitive learning method does not provide complete feedback, this paper puts an emphasis on a dynamic formative evaluation that can maximize learning achievement. Through the dynamic formative evaluation, the instructor is able to refer to the evaluation results when assessing the learner. By presenting the flow of learning based on the dynamic formative evaluation, the model's effectiveness and validity are demonstrated.

Keywords: online learning, dynamic formative evaluation, mastery learning, repetitive learning method, learning achievement

Procedia PDF Downloads 511
18027 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

Authors: Salam Khalifa, Naveed Ahmed

Abstract:

We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to exploit temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.

Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation

Procedia PDF Downloads 373
18026 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Structural health monitoring based on seismic responses has been used to reduce seismic damage. The inter-story drift ratio, a major index in seismic capacity assessment, is employed for estimating the seismic damage of buildings. Meanwhile, the seismic response analysis needed to estimate the structural responses of buildings carries a significantly high computational cost, due to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under earthquakes efficiently, this paper proposes an estimation method based on an artificial neural network (ANN). In the method, a radial basis function neural network (RBFNN) is integrated with an optimization algorithm that tunes its variables through evolutionary learning, yielding an evolutionary radial basis function neural network (ERBFNN). The method estimates the inter-story drift without seismic response analysis when buildings are subjected to new earthquakes. The effectiveness of the method is verified through a simulation using a multi-degree-of-freedom system.
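
The ERBFNN idea, an RBF network whose centers and widths are tuned by an evolutionary search while the output weights are fitted by least squares, can be sketched compactly. Below, a simple (1+1) evolution strategy stands in for the paper's genetic algorithm, and the regression data are synthetic stand-ins for the drift-estimation task.

```python
import numpy as np

def rbf_predict(X, centers, width, weights):
    """RBF network output: weighted sum of Gaussian kernels."""
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2)) @ weights

def fit_weights(X, y, centers, width, ridge=1e-8):
    """Output weights by regularized least squares, given the basis."""
    Phi = np.exp(-((X[:, None, :] - centers[None, :, :])**2).sum(-1)
                 / (2 * width**2))
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                           Phi.T @ y)

def evolve_rbf(X, y, n_centers=8, iters=200, seed=0):
    """(1+1)-ES over centers and width; a GA plays this role in the paper."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    width = 1.0
    w = fit_weights(X, y, centers, width)
    best = np.mean((rbf_predict(X, centers, width, w) - y)**2)
    for _ in range(iters):
        c2 = centers + 0.05 * rng.standard_normal(centers.shape)  # mutate
        s2 = abs(width + 0.05 * rng.standard_normal())
        w2 = fit_weights(X, y, c2, s2)
        err = np.mean((rbf_predict(X, c2, s2, w2) - y)**2)
        if err < best:                                  # keep improvements
            centers, width, w, best = c2, s2, w2, err
    return centers, width, w, best

# Synthetic stand-in: map a ground-motion feature to an inter-story drift.
X = np.linspace(0, 1, 100)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(100)
*_, mse = evolve_rbf(X, y)
print(f"training MSE ~ {mse:.4f}")
```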

Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm

Procedia PDF Downloads 327
18025 Estimating 3D-Position of a Stationary Random Acoustic Source Using Bispectral Analysis of 4-Point Detected Signals

Authors: Katsumi Hirata

Abstract:

To develop a useful acoustic environment recognition system, a method is proposed for estimating the 3D position of a stationary random acoustic source using bispectral analysis of signals detected at four points. The method uses information about amplitude attenuation and propagation delay extracted from the amplitude ratios and angles of the auto- and cross-bispectra of the detected signals. Bispectral analysis is expected to be less influenced by Gaussian noise than conventional power spectral analysis. In this paper, the basic principle of the method is presented first, and its validity and features are then examined using results of fundamental experiments under assumed ideal conditions.
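
The auto-bispectrum behind the method is the third-order spectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; the property this abstract appeals to is that zero-mean Gaussian noise has a vanishing bispectrum. A minimal segment-averaged estimator (the segment length and test signal are illustrative; the paper's 4-point cross-bispectra and position solver are not reproduced):

```python
import numpy as np

def bispectrum(x, nseg=64):
    """Direct (segment-averaged) auto-bispectrum estimate:
    B[f1, f2] = mean over segments of X[f1] * X[f2] * conj(X[f1 + f2])."""
    segs = x[:len(x) // nseg * nseg].reshape(-1, nseg)
    X = np.fft.fft(segs - segs.mean(axis=1, keepdims=True), axis=1)
    f = np.arange(nseg // 2)
    B = np.zeros((len(f), len(f)), dtype=complex)
    for i in f:
        for j in f:
            if i + j < nseg:
                B[i, j] = np.mean(X[:, i] * X[:, j] * np.conj(X[:, i + j]))
    return B

rng = np.random.default_rng(0)
t = np.arange(4096)
# Quadratically coupled tones (bins 6 + 10 = 16) give a bispectral peak;
# the added Gaussian noise contributes almost nothing to the bispectrum.
x = (np.cos(2 * np.pi * 6 / 64 * t) + np.cos(2 * np.pi * 10 / 64 * t)
     + np.cos(2 * np.pi * 16 / 64 * t) + 0.5 * rng.standard_normal(t.size))
B = bispectrum(x)
i, j = np.unravel_index(np.abs(B).argmax(), B.shape)
print(f"peak |B| at frequency bins ({i}, {j})")   # -> (6, 10)
```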

Keywords: 4-point detection, a stationary random acoustic source, auto- and cross-bispectra, estimation of 3D-position

Procedia PDF Downloads 359
18024 Mine Project Evaluations in the Rising of Uncertainty: Real Options Analysis

Authors: I. Inthanongsone, C. Drebenstedt, J. C. Bongaerts, P. Sontamino

Abstract:

The major concern in evaluating the value of mining projects relates to the deficiency of the traditional discounted cash flow (DCF) method. This method does not take uncertainties into account and hence does not allow for an economic assessment of managerial flexibility and operational adaptability, which increasingly determine long-term corporate success. Such an assessment can be performed with the real options valuation (ROV) approach, since it allows for a comparative evaluation of unforeseen uncertainties over a project's life cycle. This paper presents an economic evaluation model for open-pit mining projects based on the real options valuation approach. Uncertainties in the model arise from metal prices and costs, and the system dynamics (SD) modeling method is used to structure and solve the real options model. The model is applied to a case study. It can be shown that managerial flexibility in reacting to uncertainties may create additional value for a mining project compared with the outcomes of the DCF method. One important insight for management dealing with uncertainty lies in choosing the optimal time to exercise strategic options.

Keywords: DCF methods, ROV approach, system dynamics modeling methods, uncertainty

Procedia PDF Downloads 501
18023 Optimization of Process Parameters by Using Taguchi Method for Bainitic Steel Machining

Authors: Vinay Patil, Swapnil Kekade, Ashish Supare, Vinayak Pawar, Shital Jadhav, Rajkumar Singh

Abstract:

In recent years, bainitic steel has been used in automotive and non-automotive sectors due to its high strength. Bainitic steel is difficult to machine because of its high hardness; hence, in this paper, the machinability of bainitic steel is studied using the Taguchi design of experiments (DOE) approach. Conventional turning experiments were performed using an L16 orthogonal array for three input parameters, viz. cutting speed, depth of cut, and feed. The Taguchi method is applied to study the effect of the machining parameters on surface roughness (Ra), cutting force, and tool wear rate. Using Taguchi analysis, the process parameters optimized for the best surface finish and minimum cutting forces were identified.
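
For smaller-the-better responses such as Ra, cutting force, and tool wear, the Taguchi signal-to-noise ratio is S/N = -10*log10(mean(y^2)), and factor levels are ranked by their mean S/N. A minimal sketch over an illustrative slice of an L16 design (the response values below are made up, not the paper's measurements):

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi S/N ratio for smaller-the-better responses:
    S/N = -10 * log10(mean(y^2)); higher is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Illustrative runs: (cutting-speed level, depth level, feed level, Ra in um).
runs = [(1, 1, 1, 1.8), (1, 2, 2, 2.4), (2, 1, 2, 1.5), (2, 2, 1, 2.1)]

# Main effect of cutting speed: mean S/N at each level.
for level in (1, 2):
    sn = np.mean([sn_smaller_better([ra]) for s, d, f, ra in runs if s == level])
    print(f"cutting speed level {level}: mean S/N = {sn:.2f} dB")
```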

Keywords: conventional turning, Taguchi method, S/N ratio, bainitic steel machining

Procedia PDF Downloads 331
18022 Comparative Study of Active Release Technique and Myofascial Release Technique in Patients with Upper Trapezius Spasm

Authors: Harihara Prakash Ramanathan, Daksha Mishra, Ankita Dhaduk

Abstract:

Relevance: This study informs clinicians about putting advanced movement-science methods into practice to restore function. Purpose: The purpose of this study is to compare the effectiveness of the Active Release Technique and the myofascial release technique on range of motion, neck function, and pain in patients with upper trapezius spasm. Methods/Analysis: The study was approved by the institutional human research and ethics committee. It included sixty patients aged 20 to 55 years with upper trapezius spasm, randomly divided into two groups receiving the Active Release Technique (Group A) or the myofascial release technique (Group B). The patients were treated for one week, and three outcome measures, range of motion (ROM), pain, and functional level, were assessed using a goniometer, the visual analog scale (VAS), and the Neck Disability Index (NDI) questionnaire, respectively. Paired-sample t-tests were used to compare pre- and post-intervention values of cervical range of motion, NDI, and VAS within Group A and Group B. An independent t-test was used to compare the two groups in terms of improvement in cervical range of motion and decreases in VAS and NDI scores. Results: Both groups showed statistically significant improvements in cervical ROM, pain, and NDI scores. However, the mean changes in cervical flexion, cervical extension, right and left side flexion, right and left side rotation, pain, and neck disability showed statistically significantly greater improvement (P < 0.05) in the patients who received the Active Release Technique than in those who received the myofascial release technique. Discussion and conclusions: In the present study, the average improvement immediately post-intervention was significantly greater than before treatment, but there was even more improvement after seven sessions than after a single session, which shows that several sessions of manual techniques are necessary to produce clinically relevant results. The Active Release Technique helps to reduce the pain threshold by removing adhesions and promoting normal tissue extensibility. The act of tensioning and compressing the affected tissue, both with digital contact and through the active movement performed by the patient, is a plausible mechanism for tissue healing in this study. The study concluded that both the Active Release Technique (ART) and the myofascial release technique (MFR) are effective in managing upper trapezius muscle spasm, but more improvement can be achieved with ART. Impact and implications: The Active Release Technique can be adopted as a mainstay treatment approach for trapezius spasm, providing faster relief and improving functional status.

Keywords: trapezius spasm, myofascial release, active release technique, pain

Procedia PDF Downloads 273
18021 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements would be possible by expanding the range over which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate the detector safely and correctly, provide optimum operating conditions for the acquisition and storage of physics data, and ensure these data are of the highest quality. The operation of AD0 involves the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-condition data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready for data taking under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltage for each photomultiplier tube (PMT), the threshold levels for accepting or rejecting incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0, and they have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system that integrates into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 306
18020 In- and Out-of-Sample Performance of Non-Symmetric Models in International Price Differential Forecasting in a Commodity Country Framework

Authors: Nicola Rubino

Abstract:

This paper presents an analysis of the nominal exchange rate movements of a group of commodity-exporting countries relative to the US dollar. Using a series of unrestricted self-exciting threshold autoregressive (SETAR) models, we model and evaluate sixteen national CPI price differentials relative to the US dollar CPI. Out-of-sample forecast accuracy is evaluated through mean absolute error measures on the basis of 253-month rolling-window forecasts, and the comparison is extended to three additional models: a logistic smooth transition regression (LSTAR), an additive nonlinear autoregressive model (AAR), and a simple linear neural network model (NNET). Our preliminary results confirm the presence of some form of TAR nonlinearity in the majority of the countries analyzed, with a relatively higher goodness of fit, with respect to the linear AR(1) benchmark, in five of the sixteen countries considered. Although no model appears to statistically prevail over the others, our final out-of-sample forecast exercise shows that SETAR models tend to have rather poor relative forecasting performance, especially when compared to alternative nonlinear specifications. Finally, by analyzing the implied half-lives of the estimated autoregressive coefficients, our results confirm the presence, in the spirit of arbitrage band adjustment, of band convergence with inner-regime unit-root behaviour in five of the sixteen countries analyzed.
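
A SETAR fit of the kind used here splits the sample according to whether the lagged series is above or below a threshold, fits an AR model in each regime, and grid-searches the threshold by least squares. A minimal sketch on synthetic data (delay 1, AR order 1, and the 15% trimming are illustrative simplifications of the paper's specifications):

```python
import numpy as np

def fit_setar(y, trim=0.15):
    """Two-regime SETAR(1) with delay 1: grid-search the threshold c and
    fit y_t = a_r + b_r * y_{t-1} + e_t separately in each regime."""
    y0, y1 = y[1:], y[:-1]                 # current value and its lag
    cands = np.sort(y1)[int(trim * len(y1)):int((1 - trim) * len(y1))]
    best = (np.inf, None)
    for c in cands:
        ssr = 0.0
        for mask in (y1 <= c, y1 > c):
            X = np.column_stack([np.ones(mask.sum()), y1[mask]])
            beta, *_ = np.linalg.lstsq(X, y0[mask], rcond=None)
            ssr += np.sum((y0[mask] - X @ beta)**2)
        if ssr < best[0]:
            best = (ssr, c)
    return best[1]

# Synthetic series with a single true threshold at 0.
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    if y[t - 1] <= 0:
        y[t] = -0.3 + 0.4 * y[t - 1] + 0.2 * rng.standard_normal()
    else:
        y[t] = 0.3 + 0.8 * y[t - 1] + 0.2 * rng.standard_normal()
print(f"estimated threshold ~ {fit_setar(y):.2f}")
```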

Keywords: transition regression model, real exchange rate, nonlinearities, price differentials, PPP, commodity points

Procedia PDF Downloads 278
18019 Attachment Systems and Psychotherapy: An Internal Secure Caregiver to Heal and Protect the Parts of Our Clients: InCorporer Method

Authors: Julien Baillet

Abstract:

In light of 30 years of scientific research, the InCorporer method was created in 2019 as a new approach to healing traumatic, developmental, and dissociative injuries. Following natural nervous system functions, InCorporer aims to heal, develop, and update the different defensive mammalian subsystems: fight, flight, freeze, feign death, cry for help, and energy regulation. The dimensions taken into account are: (i) healing the traumatic injuries that are still bleeding; (ii) developing the systems that never received the security, attention, and affection they needed; (iii) updating the parts that have stayed stuck in the past, unaware for too long that they are now out of danger. Through the Present Part and its caregiving skills, the InCorporer method enables a balanced, soothed, and collaborative personality system. To be as integrative as possible, the InCorporer method has been designed according to several fields of research, including structural dissociation theory, attachment theory, and information processing theory. In this paper, the author presents how the internal caregiver is developed and trained to heal the different parts/subsystems of clients through mindful attention and reflex movement integration.

Keywords: PTSD, attachment, dissociation, part work

Procedia PDF Downloads 78
18018 Efficient Method for Inducing Embryos from Isolated Microspores of Durum Wheat

Authors: Zelikha Labbani

Abstract:

Durum wheat is an attractive species for studying androgenesis via isolated microspore culture, with the aim of increasing androgenic yield, such as embryogenesis induction, in recalcitrant species. We describe here an efficient method for inducing embryos from isolated microspores of durum wheat. It is shown that this method, combined with pretreatment of the spikes (kept within their sheath leaves) by cold alone, cold plus mannitol, or mannitol alone for different durations, has significant positive effects on embryo production. The aim of this study was therefore to test the effect of 0.3 M mannitol and cold pretreatments on the quality and quantity of embryos produced from microspore cultures of wheat cultivars.

Keywords: in vitro embryogenesis, isolated microspore culture, durum wheat, pretreatments, 0.3 M mannitol, cold pretreatment

Procedia PDF Downloads 57
18017 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning

Authors: Ahcene Habbi, Yassine Boudouaoui

Abstract:

This paper deals with the problem of automatic rule generation for fuzzy system design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and the weighted least squares (LS) method, and aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC-based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models; the second hybridizes ABC with weighted least squares estimation. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.

Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization

Procedia PDF Downloads 437
18016 A Picture Is Worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever-increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense, sub-diffraction-limit, one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has recently been embodied in the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, which must produce a continuous high-dynamic-range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty but suffers from image artifacts and impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data-fitting term with a sparse synthesis prior. We also show an efficient, hardware-friendly, real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
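
For the simplest jot model (a one-bit pixel that fires when at least one photoelectron arrives, under Poisson light), the per-pixel ML estimate of the light level has a closed form: if p is the fraction of fired jots, then lambda = -ln(1 - p). The sketch below inverts this on synthetic binary data; the paper's threshold-pixel sensor and sparse-prior reconstruction are more general than this closed-form special case.

```python
import numpy as np

def ml_exposure(binary_stack):
    """Closed-form ML light-level estimate from one-bit (threshold-1) jots.

    binary_stack : (n_frames, rows, cols) array of 0/1 detections.
    Under Poisson(lambda) photons, P(jot fires) = 1 - exp(-lambda),
    so the MLE is lambda_hat = -log(1 - fired_fraction)."""
    p = binary_stack.mean(axis=0)
    p = np.clip(p, 0, 1 - 1e-6)        # avoid log(0) when all jots fired
    return -np.log1p(-p)

rng = np.random.default_rng(0)
lam_true = 0.8 * np.ones((64, 64))                  # flat test scene
jots = rng.poisson(lam_true, size=(200, 64, 64)) >= 1
lam_hat = ml_exposure(jots.astype(float))
print(f"true 0.80, estimated {lam_hat.mean():.3f}")
```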

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 201
18015 Rationalized Haar Transforms Approach to Design of Observer for Control Systems with Unknown Inputs

Authors: Joon-Hoon Park

Abstract:

The fundamental concept of observability is important from both theoretical and practical points of view in modern control systems. In modern control theory, a control system has criteria for determining whether a design solution exists for the system parameters and design objectives. The idea of observability relates to the condition of observing or estimating the state variables from the output variables, which are generally measurable. To design a closed-loop control system, the practical problems of implementing state-variable feedback must be considered; since not all state variables are available, it is necessary to design and implement an observer that estimates the state variables from the output parameters. In practice, however, unknown inputs are sometimes present in control systems. This paper presents a design method and algorithm for an observer for control systems with unknown input parameters based on the rationalized Haar transform. The proposed method is more advantageous than other numerical methods.

Keywords: orthogonal functions, rationalized Haar transforms, control system observer, algebraic method

Procedia PDF Downloads 370
18014 System Identification in Presence of Outliers

Authors: Chao Yu, Qing-Guo Wang, Dan Zhang

Abstract:

The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and is further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while keeping the solution matrix structure, greatly reducing the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can detect outliers exactly when there is no or little noise in the output observations. For significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of the outliers; the filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Using the recovered "clean" data from the proposed method can give much better parameter estimates than using the raw data.
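
The low-rank-plus-sparse decomposition underlying this formulation is commonly solved by principal component pursuit: minimize ||L||_* + lambda * ||S||_1 subject to M = L + S. A minimal ADMM-style sketch of that textbook baseline (not the authors' structure-preserving fast algorithm or their SDP recasting):

```python
import numpy as np

def rpca(M, lam=None, mu=None, iters=200):
    """Principal component pursuit by alternating singular-value
    thresholding (low-rank part L) and soft thresholding (sparse part S)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(M).sum())
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt     # SVT step
        S = shrink(M - L + Y / mu, lam / mu)          # sparse outliers
        Y += mu * (M - L - S)                         # dual update
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 60))  # rank 4
S0 = np.zeros((60, 60))
idx = rng.choice(3600, 90, replace=False)
S0.flat[idx] = 10 * rng.standard_normal(90)           # 2.5% gross outliers
L, S = rpca(L0 + S0)
print("outlier support recovered:", np.mean((np.abs(S) > 1) == (S0 != 0)))
```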

Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising

Procedia PDF Downloads 307
18013 Estimating the Mean Parameter of the Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares estimators of the mean parameter of the normal distribution obtained by the maximum likelihood (ML), Bayes, and Markov chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data; the Bayes estimator is derived from the prior distribution; and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the parameter is estimated by each method, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variances of 4, 9, and 16, for sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed with a prior distribution of mean 1 and variance 12, showed a significant difference in the mean for variance 9 at sample sizes 10 and 20.
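
With known data variance and a normal prior, all three estimators have simple forms: the MLE is the sample mean, the Bayes posterior is conjugate-normal, and Gibbs sampling reduces to repeated draws from that exact posterior. A sketch using the abstract's setting (true mean 2, variance 16, prior mean 1 and prior variance 12; treating the data variance as known is a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu_true, var = 20, 2.0, 16.0
x = rng.normal(mu_true, np.sqrt(var), size=n)

# Maximum likelihood: the sample mean.
mle = x.mean()

# Conjugate Bayes: prior N(mu0, var0), known data variance.
# Posterior precision and mean follow the standard normal-normal update.
mu0, var0 = 1.0, 12.0
post_prec = 1.0 / var0 + n / var
post_mean = (mu0 / var0 + n * x.mean() / var) / post_prec

# MCMC: for this conjugate model, a Gibbs sampler amounts to repeated
# draws from the exact posterior; we average the draws.
draws = rng.normal(post_mean, np.sqrt(1.0 / post_prec), size=5000)

print(f"MLE {mle:.3f} | Bayes {post_mean:.3f} | MCMC {draws.mean():.3f}")
```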

Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution

Procedia PDF Downloads 356
18012 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect accumulated ice on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions, so image processing methods are used to measure ice loads automatically. Most image processing methods are based on the analysis of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars was designed as the known structure of the experimental setup, and asymmetric ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to express the structure coordinates in the image coordinate system according to the camera location and angle. Thresholding is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining information from the binary image with the structure coordinates, and averaging the ice diameters from different camera views yields the ice thickness of each structural element. A comparison between ice load measurements obtained with this method and the actual ice loads shows positive correlation within an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
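
The per-element measurement reduces to thresholding the image, counting iced pixels across the known bar axis, and converting pixel widths to metric diameters via calibration. A minimal sketch on a synthetic image (the threshold, scale factor, and geometry are illustrative; the real system projects the 3D bar coordinates through the calibrated camera parameters):

```python
import numpy as np

def iced_diameter(gray, row_range, threshold=128, mm_per_px=0.5):
    """Estimate the iced diameter of a horizontal bar from one view.

    gray      : 2-D grayscale image (bright ice on a dark background)
    row_range : (top, bottom) rows bounding the bar in this view
    """
    binary = gray > threshold                       # segment ice
    band = binary[row_range[0]:row_range[1], :]
    widths = band.sum(axis=0)                       # iced pixels per column
    return widths[widths > 0].mean() * mm_per_px    # mean width in mm

# Synthetic view: a 20 px thick bright "iced bar" across a dark image.
img = np.zeros((100, 200), dtype=np.uint8)
img[40:60, :] = 200
print(f"diameter ~ {iced_diameter(img, (30, 70)):.1f} mm")  # 20 px * 0.5 = 10 mm
```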

Keywords: camera calibration, ice detection, ice load measurements, image processing

Procedia PDF Downloads 368
18011 Collaborative Implementation of Master Plans in Afghanistan's Context Considering Land Readjustment as Case Study

Authors: Ahmad Javid Habib, Tetsuo Kidokoro

Abstract:

There is an increasing demand for developing urban land to provide better living conditions for all citizens in Afghanistan, and most of this development will involve the acquisition of land. The current land acquisition method practiced by the central government is expropriation, a cash-based transaction that imposes a heavy fiscal burden on local municipalities and the central government; it does not protect the ownership rights and social equity of landowners, and it relocates the urban poor to remote areas with limited access to jobs and public services. The questionnaire analysis, backed by observations of case studies in countries where land readjustment is used as a collaborative land development tool, indicates that the method plays a key role in valuing landowners' rights and gives other community members and stakeholders the opportunity to implement urban development projects collaboratively. Practicing the method reduces the heavy fiscal burden on the local and central governments and is a better option for dealing with the current development challenges in Afghanistan.

Keywords: collaboration, land readjustment, master plan, expropriation

Procedia PDF Downloads 295
18010 Room Temperature Lasing from InGaAs Quantum Well Nanowires on Silicon-On-Insulator Substrates

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Quantum confinement can be used to increase efficiency and control the emitted spectra of lasers and LEDs. In semiconductor nanowires, quantum confinement can be achieved in the axial direction by stacking multiple quantum disks or in the radial direction by forming a core-shell structure. In this work, we demonstrate room-temperature lasing in topological photonic crystal nanowire array lasers using an InGaAs radial quantum well as the gain material. The nanowires, with a GaAs/InGaAs/InGaP quantum well structure, are arranged in a deformed honeycomb lattice, forming a photonic crystal surface-emitting laser (PCSEL). Under optical pumping, the PCSELs lase at wavelengths of 1001 nm (undeformed pattern) and 966 nm (stretched pattern), with a lasing threshold of 103 µJ/cm². We compare the lasing wavelengths from devices with three different nanowire diameters for undeformed, compressed, and stretched devices, showing that the lasing wavelength increases as the nanowire diameter increases. The impact of deforming the honeycomb pattern is also studied; the lasing wavelengths of undeformed devices are always larger than those of the corresponding stretched or compressed devices with the same nanowire diameter. Using photoluminescence results and numerical simulations of the field profiles and quality factors of the devices, we establish that the lasing originates from the radial quantum well structure.

Keywords: honeycomb PCSEL, nanowire laser, photonic crystal laser, quantum well laser

Procedia PDF Downloads 12