Search results for: fundamental models
5123 Reliability Modeling of Repairable Subsystems in Semiconductor Fabrication: A Virtual Age and General Repair Framework
Authors: Keshav Dubey, Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta
Abstract:
In the semiconductor capital equipment industry, effective modeling of repairable system reliability is crucial for optimizing maintenance strategies and ensuring operational efficiency. However, repairable system reliability modeling using a renewal process is not as popular in the semiconductor equipment industry as it is in the locomotive and automotive industries; utilization of this approach will help optimize maintenance practices. This paper presents a structured framework that leverages both parametric and non-parametric approaches to model the reliability of repairable subsystems based on operational data, maintenance schedules, and system-specific conditions. Data is organized at the equipment ID level, facilitating trend testing to uncover failure patterns and system degradation over time. For non-parametric modeling, the Mean Cumulative Function (MCF) approach is applied, offering a flexible method to estimate the cumulative number of failures over time without assuming an underlying statistical distribution. This allows for empirical insights into subsystem failure behavior based on historical data. On the parametric side, virtual age modeling, along with Homogeneous and Non-Homogeneous Poisson Process (HPP and NHPP) models, is employed to quantify the effect of repairs and the aging process on subsystem reliability. These models allow for a more structured analysis by characterizing repair effectiveness and system wear-out trends over time. A comparison of various Generalized Renewal Process (GRP) approaches highlights their utility in modeling different repair effectiveness scenarios. These approaches provide a robust framework for assessing the impact of maintenance actions on system performance and reliability.
By integrating both parametric and non-parametric methods, this framework offers a comprehensive toolset for reliability engineers to better understand equipment behavior, assess the effectiveness of maintenance activities, and make data-driven decisions that enhance system availability and operational performance in semiconductor fabrication facilities.
Keywords: reliability, maintainability, homogeneous Poisson process, repairable system
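As a rough illustration of the non-parametric MCF approach described above, the sketch below computes a naive MCF estimate from per-equipment failure histories. It assumes every unit is observed over the same window with no censoring, which is a simplification of real fleet data; the equipment IDs and failure times are invented.

```python
from collections import Counter

def mean_cumulative_function(failure_times_by_unit, end_time):
    """Non-parametric Mean Cumulative Function (MCF) estimate for a
    fleet of repairable units, assuming every unit is observed over
    the same window [0, end_time] (no censoring before end_time)."""
    n_units = len(failure_times_by_unit)
    # Pool all failure times across equipment IDs and count ties.
    counts = Counter(t for times in failure_times_by_unit.values()
                     for t in times if t <= end_time)
    mcf, cumulative = [], 0.0
    for t in sorted(counts):
        # Each failure event adds 1/n_units to the mean cumulative count.
        cumulative += counts[t] / n_units
        mcf.append((t, cumulative))
    return mcf

# Failure histories (hours) for three hypothetical equipment IDs.
fleet = {"EQ-01": [100, 400], "EQ-02": [250], "EQ-03": [100, 300, 450]}
curve = mean_cumulative_function(fleet, end_time=500)
print(curve)
```

A rising, convex MCF curve would suggest wear-out; a concave one, improvement. Real analyses handle staggered observation windows and suspensions, which this sketch omits.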
Procedia PDF Downloads 20

5122 Methodologies, Systems Development Life Cycle and Modeling Languages in Agile Software Development
Authors: I. D. Arroyo
Abstract:
This article seeks to integrate different concepts from contemporary software engineering with an agile development approach. We seek to clarify some definitions and uses; we distinguish between the Systems Development Life Cycle (SDLC) and methodologies; and we differentiate types of frameworks, such as methodological, philosophical and behavioral frameworks, standards and documentation. We define relationships based on the documentation of the development process through formal and ad hoc models, and we define the usefulness of using DevOps and Agile Modeling as integrative methodologies of principles and best practices.
Keywords: methodologies, modeling languages, agile modeling, UML
Procedia PDF Downloads 186

5121 Explaining the Steps of Designing and Calculating the Content Validity Ratio Index of the Screening Checklist of Preschool Students (5 to 7 Years Old) Exposed to Learning Difficulties
Authors: Sajed Yaghoubnezhad, Sedygheh Rezai
Abstract:
Background and Aim: Since, currently in Iran, students with learning disabilities are identified only after entering school, using the approach based on the gap between IQ and academic achievement, the purpose of this study is to design and calculate the content validity of the pre-school screening checklist (5-7) for children exposed to learning difficulties. Methods: This research is a fundamental study and, in terms of data collection method, quantitative research with a descriptive approach. In order to design this checklist, after reviewing the research background and theoretical foundations, cognitive abilities (visual processing, auditory processing, phonological awareness, executive functions, visual-spatial working memory and fine motor skills) were considered the basic variables of school learning. The basic items and worksheets of the screening checklist for pre-school students 5 to 7 years old with learning difficulties were compiled based on the mentioned abilities and provided to specialists in order to calculate the content validity ratio (CVR) index. Results: Based on the results of the table, the CVR index of the background information checklist is 0.9, and the CVR index of the performance checklist of preschool children (5 to 7 years) is 0.78. Overall, the CVR index of this checklist is 0.84. The results of this study provide good evidence for the validity of the pre-school screening checklist (5-7) for children exposed to learning difficulties.
Keywords: checklist, screening, preschoolers, learning difficulties
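The content validity ratio used in the abstract above has a simple closed form (Lawshe's CVR). The sketch below shows the formula and reproduces the reported aggregation of the two sub-checklist indexes into an overall value; the panel size of 10 is an illustrative assumption, not the study's actual number of specialists.

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's Content Validity Ratio: CVR = (n_e - N/2) / (N/2),
    where n_e is the number of panelists rating an item 'essential'
    and N is the total number of panelists.  CVR ranges from -1
    (nobody says essential) to +1 (everyone does)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel of 10 specialists; these counts are illustrative,
# not the study's raw ratings.
print(content_validity_ratio(9, 10))   # an item rated essential by 9 of 10
print((0.9 + 0.78) / 2)                # overall index as the mean of the two sub-checklists
```

Averaging 0.9 and 0.78 gives 0.84, matching the overall index reported in the abstract.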
Procedia PDF Downloads 102

5120 Algorithms Inspired from Human Behavior Applied to Optimization of a Complex Process
Authors: S. Curteanu, F. Leon, M. Gavrilescu, S. A. Floria
Abstract:
Optimization algorithms inspired by human behavior were applied in this approach, associated with neural network models. The algorithms belong to two classes: human behaviors of learning and cooperation, and human competitive behavior. For the first class, the main strategies include random learning, individual learning, and social learning, and the selected algorithms are simplified human learning optimization (SHLO), social learning optimization (SLO), and teaching-learning based optimization (TLBO). For the second class, the concept of learning is associated with competitiveness, and the selected algorithms are sports-inspired algorithms (the Football Game Algorithm, FGA, and Volleyball Premier League, VPL) and the Imperialist Competitive Algorithm (ICA). A real process, the synthesis of polyacrylamide-based multicomponent hydrogels, where some parameters are difficult to obtain experimentally, is considered as a case study. Reaction yield and swelling degree are predicted as a function of reaction conditions (acrylamide concentration, initiator concentration, crosslinking agent concentration, temperature, reaction time, and amount of inclusion polymer, which could be starch, poly(vinyl alcohol) or gelatin). The experimental results contain 175 data points. Artificial neural networks are obtained in optimal form with the biologically inspired algorithms, the optimization being performed at two levels: structural and parametric. Feedforward neural networks with one or two hidden layers and no more than 25 neurons in intermediate layers were obtained, with correlation coefficients in the validation phase over 0.90. The best results were obtained with the TLBO algorithm, the correlation coefficient being 0.94 for an MLP(6:9:20:2), a feedforward neural network with two hidden layers of 9 and 20 neurons, respectively. The good results obtained prove the efficiency of the optimization algorithms.
More than the good results, what is important in this approach is the simulation methodology, combining neural networks and biologically inspired optimization algorithms, which provides satisfactory results. In addition, the methodology developed in this approach is general and flexible, so that it can be easily adapted to other processes in association with different types of models.
Keywords: artificial neural networks, human behaviors of learning and cooperation, human competitive behavior, optimization algorithms
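The TLBO algorithm named above is simple enough to sketch. The toy below minimizes a sphere function rather than tuning neural network weights, and all population sizes, iteration counts and bounds are illustrative choices, not the paper's settings.

```python
import random

def tlbo(objective, bounds, pop_size=20, iters=100, seed=42):
    """Minimal Teaching-Learning-Based Optimization (TLBO) sketch for
    minimizing `objective` over the box `bounds` (illustrative only)."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: move learners toward the current best solution.
        teacher = min(pop, key=objective)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            new = clip([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                        for d in range(dim)])
            if objective(new) < objective(x):
                pop[i] = new
        # Learner phase: learn from a randomly chosen peer.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1 if objective(x) < objective(pop[j]) else -1
            new = clip([x[d] + sign * rng.random() * (x[d] - pop[j][d])
                        for d in range(dim)])
            if objective(new) < objective(x):
                pop[i] = new
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)
best = tlbo(sphere, bounds=[(-5, 5)] * 3)
print(best, sphere(best))
```

A notable design property of TLBO is that, beyond population size and iteration count, it has no algorithm-specific tuning parameters, which is part of its appeal for model calibration tasks like the one above.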
Procedia PDF Downloads 108

5119 Loving and Letting Go: Bounded Attachment in Creative Work
Authors: Greg Fetzer
Abstract:
One of the fundamental tensions of creative work is between the need to be passionate and persistent in advancing novel and risky ideas and the need to be flexible, revising or even abandoning ideas in favor of others. The tension becomes fraught in part because of the attachment that creators have toward their ideas. Idea attachment is defined here as a multifaceted concept referring to affection, passion, and connection toward a target—in this case, one’s projects or ideas. Yet feeling attached can make creators resistant to feedback, making them less flexible and leading them to escalate commitment. Despite a growing understanding of how attachment develops and evolves in response to project changes, feedback, and creative jolts, we still know relatively little about the organizational dynamics that may shape idea attachment. Through a qualitative, inductive study of early-stage R&D scientists in the pharmaceutical industry, this research finds that scientists develop bounded attachment, a mindset that limits emotional attachment to ideas while still fostering engagement in idea development. This research develops a process model of how bounded attachment is developed and enacted across three stages of the creative process (idea generation, idea evaluation, and outcome assessment), as well as the role that organizational practices and professional identity play in shaping this process: these collective practices provided structures to ensure ideas were evaluated in a rational (i.e., non-emotional) way while also providing socioemotional support in the face of setbacks. Together, this process led to continued creative engagement across ideas in a portfolio and helped scientists construct a sense of meaningful work despite a high likelihood (and frequency) of failure.
Keywords: creativity, innovation, organizational practices, qualitative, attachment
Procedia PDF Downloads 60

5118 Why Do We Need Hierarchical Linear Models?
Authors: Mustafa Aydın, Ali Murat Sunbul
Abstract:
Hierarchical or nested data structures are seen in many research areas. Especially in the field of education, if we examine most studies, we can see nested structures: students in classes, classes in schools, schools in cities and cities in regions are typical examples. In a hierarchical structure, students in the same class share the same physical conditions, have similar experiences, and learn from the same teachers, so they behave more similarly to one another than to students in other classes.
Keywords: hierarchical linear modeling, nested data, hierarchical structure, data structure
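The within-class similarity described above is what the intraclass correlation (ICC) quantifies, and a high ICC is the usual argument for a hierarchical model over ordinary regression. A minimal sketch using the one-way ANOVA estimator, on invented, balanced class data:

```python
def intraclass_correlation(groups):
    """One-way ANOVA estimate of the intraclass correlation (ICC):
    ICC = (MSB - MSW) / (MSB + (n - 1) * MSW) for balanced groups of
    size n.  A high ICC means observations cluster within groups, so
    the independence assumption of ordinary regression breaks down."""
    k = len(groups)                      # number of groups (e.g. classes)
    n = len(groups[0])                   # observations per group (balanced)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three hypothetical classes with strongly clustered scores.
classes = [[70, 72, 71], [85, 86, 84], [55, 54, 56]]
icc = intraclass_correlation(classes)
print(icc)
```

Here nearly all variation is between classes, so the ICC comes out close to 1; with ICC near 0, a single-level model would be defensible.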
Procedia PDF Downloads 652

5117 Multilevel Modelling of Modern Contraceptive Use in Nigeria: Analysis of the 2013 NDHS
Authors: Akiode Ayobami, Akiode Akinsewa, Odeku Mojisola, Salako Busola, Odutolu Omobola, Nuhu Khadija
Abstract:
Purpose: Evidence exists that family planning use can contribute to a reduction in infant and maternal mortality in any country. Despite these benefits, contraceptive use in Nigeria still remains very low: only 10% among married women. Understanding the factors that predict contraceptive use is very important in order to improve the situation. In this paper, we analysed data from the 2013 Nigerian Demographic and Health Survey (NDHS) to better understand predictors of contraceptive use in Nigeria. The use of logistic regression and other traditional models in this type of situation is not appropriate, as they do not account for the influence of social structure on the response variable brought about by the hierarchical nature of the data. We therefore used multilevel modelling to explore the determinants of contraceptive use in order to account for the significant variation in modern contraceptive use by socio-demographic and other proximate variables across the different Nigerian states. Method: The data has a two-level hierarchical structure. We considered the data of 26,403 married women of reproductive age at level 1 and nested them within the 36 states and the Federal Capital Territory, Abuja, at level 2. We modelled use of modern contraceptives against demographic variables, being told about FP at a health facility, having heard of FP on TV, in a magazine or on the radio, and the husband's desire for more children, nested within the state. Results: Our results showed that the independent variables in the model were significant predictors of modern contraceptive use. The estimated variance components for the null model, random intercept, and random slope models were significant (p=0.00), indicating that the variation in contraceptive use across the Nigerian states is significant and needs to be accounted for in order to accurately determine the predictors of contraceptive use; hence the data is best fitted by the multilevel model.
Only being told about family planning at the health facility and religion have a significant random effect, implying that their predictability of contraceptive use varies across the states. Conclusion and Recommendation: The results showed that providing FP information at the health facility and religion need to be considered when programming to improve contraceptive use at the state level.
Keywords: multilevel modelling, family planning, predictors, Nigeria
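For a random-intercept logistic model of the kind described above (binary contraceptive use, women nested in states), the share of variation attributable to states is commonly summarized on the latent scale. A minimal sketch, where the variance-component values are purely illustrative, not the NDHS estimates:

```python
import math

def latent_scale_icc(random_intercept_variance):
    """ICC for a random-intercept logistic model on the latent scale:
    ICC = var_u / (var_u + pi^2 / 3), where pi^2/3 is the variance of
    the standard logistic residual.  A significant variance component
    (as in the null model above) implies a non-trivial ICC, justifying
    the multilevel model over plain logistic regression."""
    residual = math.pi ** 2 / 3
    return random_intercept_variance / (random_intercept_variance + residual)

# Illustrative state-level variance components.
for var_u in (0.1, 0.5, 1.0):
    print(var_u, latent_scale_icc(var_u))
```

As the state-level variance grows, so does the share of variation a single-level logistic regression would wrongly treat as individual-level noise.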
Procedia PDF Downloads 419

5116 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods
Authors: Alok Madan, Arshad K. Hashmi
Abstract:
Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected; if the selected trial design does not conform to the desired performance objective, the trial design is revised. In this context, development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBD of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.
Keywords: masonry infilled frame, energy methods, near-fault ground motions, pushover analysis, nonlinear dynamic analysis, seismic demand
Procedia PDF Downloads 292

5115 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images, from which maximum power spectral density (PSD) values were extracted as features. There are three indexes for the balanced brain: index 3, index 4 and index 5. The EEG signals differ significantly across brain balancing index (BBI) levels. The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classification result is 88.46% accuracy. These results prove that k-NN can be used to predict the brain balancing index.
Keywords: power spectral density, 3D EEG model, brain balancing, kNN
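The k-NN classifier used above is a majority vote among the nearest training points in feature space. The sketch below uses toy 2-D features standing in for the (alpha PSD, beta PSD) maxima, with labels 3/4/5 mimicking the brain balancing indexes; all values are invented for illustration.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Plain k-Nearest-Neighbor vote: `train` is a list of
    (feature_vector, label) pairs; returns the majority label of the
    k training points closest (Euclidean distance) to `query`."""
    neighbors = sorted(train, key=lambda fv: math.dist(fv[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy (alpha, beta) features and BBI-style labels, purely illustrative.
train = [((0.2, 0.1), 3), ((0.25, 0.15), 3),
         ((0.5, 0.5), 4), ((0.55, 0.45), 4),
         ((0.9, 0.8), 5), ((0.85, 0.9), 5)]
print(knn_classify(train, (0.22, 0.12)))
print(knn_classify(train, (0.88, 0.85)))
```

In practice the reported 88.46% accuracy would come from cross-validating such a classifier over the 51 subjects' extracted PSD features.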
Procedia PDF Downloads 487

5114 Modelling the Physicochemical Properties of Papaya-Based Cookies Using Response Surface Methodology
Authors: Mayowa Saheed Sanusi, Musiliu Olushola Sunmonu, Abdulquadri Alaka Owolabi Raheem, Adeyemi Ikimot Adejoke
Abstract:
The development of healthy cookies for health-conscious consumers cannot be overemphasized in the present global health crisis. This study aimed to evaluate and model the influence of the ripeness level of papaya puree (unripe, ripe and overripe), oven temperature (130°C, 150°C and 170°C) and oven rack speed (stationary, 10 and 20 rpm) on the physicochemical properties of papaya-based cookies using Response Surface Methodology (RSM). The physicochemical properties (baking time, cookie mass, cookie thickness, spread ratio, proximate composition, calcium, vitamin C and Total Phenolic Content) were determined using standard procedures. The data obtained were statistically analysed at p≤0.05 using ANOVA. The polynomial regression model of response surface methodology was used to model the physicochemical properties. The adequacy of the models was determined using the coefficient of determination (R²), and the response optimizer of RSM was used to determine the optimum physicochemical properties for the papaya-based cookies. Cookies produced from overripe papaya puree were observed to have the shortest baking time; ripe papaya puree favors cookie spread ratio, while unripe papaya puree gives cookies with the highest mass and thickness. The highest crude protein content, fiber content, calcium content, vitamin C and Total Phenolic Content (TPC) were observed in papaya-based cookies produced from overripe puree. The models for baking time, cookie mass, cookie thickness, spread ratio, moisture content, crude protein and TPC were significant, with R² ranging from 0.73 to 0.95. The optimum condition for producing papaya-based cookies with desirable physicochemical properties was obtained at 149°C oven temperature, 17 rpm oven rack speed and with the use of overripe papaya puree.
Information on the use of puree from unripe, ripe and overripe papaya can help to increase the use of underutilized unripe or overripe papaya and also serve as a strategic means of obtaining a fat substitute to produce new products with lower production cost and health benefits.
Keywords: papaya-based cookies, modeling, response surface methodology, physicochemical properties
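The RSM workflow above fits second-order polynomial models and then locates their optimum. A single-factor sketch of that idea, fitting y = b0 + b1*x + b2*x² by least squares and finding the stationary point; the temperature/response numbers are invented, not the paper's data:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of a second-order (RSM-style) model
    y = b0 + b1*x + b2*x^2, solving the 3x3 normal equations by
    Gaussian elimination (one factor shown for brevity)."""
    rows = [[1.0, x, x * x] for x in xs]
    # Normal equations: (X^T X) b = X^T y, stored as an augmented matrix.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    a = [xtx[i] + [xty[i]] for i in range(3)]
    for i in range(3):                     # forward elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            a[r] = [v - f * w for v, w in zip(a[r], a[i])]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                    # back substitution
        b[i] = (a[i][3] - sum(a[i][j] * b[j] for j in range(i + 1, 3))) / a[i][i]
    return b

# Illustrative oven temperatures (deg C) vs. a response with an optimum
# near 150; made-up numbers.
temps = [130, 140, 150, 160, 170]
resp = [6.0, 8.0, 9.0, 8.1, 6.1]
b0, b1, b2 = fit_quadratic(temps, resp)
optimum = -b1 / (2 * b2)                   # stationary point of the fitted curve
print(b0, b1, b2, optimum)
```

The paper's 149°C / 17 rpm optimum comes from the same kind of stationary-point search, run over several factors simultaneously via the RSM response optimizer.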
Procedia PDF Downloads 167

5113 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. In fact, an extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship which originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH); the Sequential Arrival of Information Hypothesis (SAIH); the Dispersion of Beliefs Hypothesis (DBH); and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departures from the martingale property, i.e. from efficiency as stated by the Efficient Market Hypothesis.
This approach is applied to financial markets; using data for the S&P 500 index from 1978 to 2017, we find, on the one hand, that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
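The global version of the Hurst exponent used above can be sketched with the aggregated-variance method: for a self-similar process, Var[X(t+k) − X(t)] scales like k^(2H), so a log-log regression of increment variance on lag recovers H. This is only an illustration of the idea; the paper's Hurst–Hölder exponent is a local (pointwise) refinement. The random walk below is a martingale, so its estimate should sit near the "efficient" benchmark H = 0.5.

```python
import math
import random

def hurst_exponent(series, lags=(2, 4, 8, 16, 32)):
    """Aggregated-variance Hurst estimate: the slope of log-variance
    of k-lag increments against log-lag equals 2H."""
    pts = []
    for k in lags:
        diffs = [series[i + k] - series[i] for i in range(len(series) - k)]
        m = sum(diffs) / len(diffs)
        var = sum((d - m) ** 2 for d in diffs) / len(diffs)
        pts.append((math.log(k), math.log(var)))
    # Ordinary least-squares slope through the (log k, log var) points.
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    return slope / 2

# A plain Gaussian random walk: H should be near 0.5, the
# 'fully efficient' martingale benchmark discussed above.
rng = random.Random(1)
walk = [0.0]
for _ in range(4096):
    walk.append(walk[-1] + rng.gauss(0, 1))
h = hurst_exponent(walk)
print(h)
```

Persistent (trending) series push H above 0.5 and anti-persistent ones below it; the paper tracks such departures through time rather than globally.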
Procedia PDF Downloads 78

5112 Decoding WallStreetBets: The Impact of Daily Disagreements on Trading Volumes
Authors: F. Ghandehari, H. Lu, L. El-Jahel, D. Jayasuriya
Abstract:
Disagreement among investors is a fundamental aspect of financial markets, significantly influencing market dynamics. Measuring this disagreement has traditionally posed challenges, often relying on proxies like analyst forecast dispersion, which are limited by biases and infrequent updates. Recent movements in social media indicate that retail investors actively seek financial advice online and can influence the stock market. The evolution of the investing landscape, particularly the rise of social media as a hub for financial advice, provides an alternative avenue for real-time measurement of investor sentiment and disagreement. Platforms like Reddit offer rich, community-driven discussions that reflect genuine investor opinions. This research explores how social media empowers retail investors and the potential of leveraging textual analysis of social media content to capture daily fluctuations in investor disagreement. This study investigates the relationship between daily investor disagreement and trading volume, focusing on the role of social media platforms in shaping market dynamics, specifically using data from WallStreetBets (WSB) on Reddit. This paper uses data from 2020 to 2023 from WSB and analyses 4,896 firms with enough social media activity in WSB to define stock-day level disagreement measures. Consistent with traditional theories that disagreement induces trading volume, the results show significant evidence supporting this claim through different disagreement measures derived from WSB discussions.
Keywords: disagreement, retail investor, social finance, social media
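A stock-day disagreement measure of the kind described can be built from classified message counts. The sketch below uses one common style (in the spirit of Antweiler and Frank's agreement index); the paper's exact measures may differ, and the counts are invented.

```python
def disagreement(bullish, bearish):
    """Stock-day disagreement from classified message counts:
    1 - |bull - bear| / total, scaled to [0, 1].
    0 = complete consensus, 1 = a perfect bull/bear split.
    Illustrative only; not necessarily the paper's definition."""
    total = bullish + bearish
    if total == 0:
        return 0.0
    return 1.0 - abs(bullish - bearish) / total

print(disagreement(50, 50))   # perfect split: maximal disagreement
print(disagreement(90, 10))   # near-consensus: low disagreement
```

Computed per stock per day over classified WSB posts, such a series can then be regressed against daily trading volume.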
Procedia PDF Downloads 40

5111 The Effect of Substrate Temperature on the Structural, Optical, and Electrical Properties of Nano-Crystalline Tin Doped-Cadmium Telluride Thin Films for Photovoltaic Applications
Authors: Eman A. Alghamdi, A. M. Aldhafiri
Abstract:
It has been found that inducing an isolated dopant level close to the middle of the bandgap, by occupying the Cd position in the CdTe lattice structure, is an efficient way of reducing the nonradiative recombination rate and increasing the solar efficiency. Following our laboratory results, this work was carried out to obtain the effect of substrate temperature on CdTe0.6Sn0.4 prepared by the thermal evaporation technique for photovoltaic application. Various substrate temperatures (25°C, 100°C, 150°C, 200°C, 250°C and 300°C) were applied. Sn-doped CdTe thin films were deposited on glass substrates at different substrate temperatures using CdTe and SnTe powders by the thermal evaporation technique. The structural properties of the prepared samples were determined using Raman spectroscopy and X-ray diffraction. Spectroscopic ellipsometry and spectrophotometric measurements were conducted to extract the optical constants as a function of substrate temperature. The structural properties of the grown films show mixed hexagonal and cubic structures, and a phase change has been reported. Scanning electron microscopy (SEM) revealed that a homogeneous film with a larger grain size was obtained at a substrate temperature of 250°C. The conductivity measurements were recorded as a function of substrate temperature. The open-circuit voltage was improved by controlling the substrate temperature, due to the improvement of fundamental material issues such as recombination and low carrier concentration. All results are explained and discussed on the basis of the influence of the Sn dopant and the substrate temperature on the structural, optical and photovoltaic characteristics.
Keywords: CdTe, conductivity, photovoltaic, ellipsometry
Procedia PDF Downloads 133

5110 The Current Development and Legislation on the Acquisition and Use of Nuclear Energy in Contemporary International Law
Authors: Uche A. Nnawulezi
Abstract:
Over the past decades, the acquisition and utilization of nuclear energy have remained among the most intractable issues that past world leaders have unsuccessfully endeavored to grapple with. This study analyzes the present development and legislation on the acquisition and utilization of nuclear energy in contemporary international law. It seeks to address international co-operation in the field of nuclear energy by looking at what nuclear energy is and how it came into being. It also seeks to address concerns expressed by researchers on the position of nuclear law within the wider domain of the law by looking at the legislative process for nuclear law and the system of treaties and conventions. This study also argues in favour of the treaty on the non-proliferation of nuclear weapons based on human rights and humanitarian principles that are not only moral, but also legal ones. Specifically, past development activities on nuclear weapons and the practical system of the nuclear energy institute are examined. The study notes, among others, former president Obama's remarks on nuclear energy and Pakistan's nuclear policies and their attendant outcomes. Essentially, we depended on documentary evidence and hence drew a great part of the data from secondary sources. The study emphatically advocates the adoption of absolute liability principles and the setting up of a viability trust fund, all of which will help in sustaining global peace, where global best practices in the acquisition and use of nuclear energy will be widely accepted in contemporary international law. The fundamental proposals made in this paper, if fully adopted, might go far in strengthening the present development and legislation on the application and utilization of nuclear energy and, accordingly, in addressing a portion of the intractable issues under international law.
Keywords: nuclear energy, international law, acquisition, development
Procedia PDF Downloads 178

5109 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, including initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments using the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of posterior predictions as obtained via ABC. Statistical validation of the posterior predictions of runoff using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments.
The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for runoff flow prediction, so the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
Keywords: automatic calibration framework, Approximate Bayesian Computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
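The core of ABC as described above is rejection sampling: draw a parameter from the prior, simulate, and keep the draw if the simulated summary is close enough to the observed one. A toy sketch of the idea (not the MIKE URBAN calibration itself; the one-parameter 'runoff' model, prior range and tolerance are all invented):

```python
import random

def abc_rejection(observed, simulate, prior_sample, n_draws=20000,
                  tolerance=0.5, seed=7):
    """Minimal ABC rejection sampler: keep prior draws whose simulated
    summary statistic falls within `tolerance` of the observed one.
    The kept draws approximate the posterior distribution."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < tolerance:
            kept.append(theta)
    return kept

# Toy model: 'runoff' is the parameter plus Gaussian noise; true value 3.
true_runoff = 3.0
sim = lambda theta, rng: theta + rng.gauss(0, 1)
prior = lambda rng: rng.uniform(0, 10)
posterior = abc_rejection(true_runoff, sim, prior)
estimate = sum(posterior) / len(posterior)
print(len(posterior), estimate)
```

The kept sample is a full posterior, which is exactly what yields the 95% credible intervals (rather than a point estimate) highlighted above.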
Procedia PDF Downloads 309

5108 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2). The modeling of the error behavior using explicative rules is used to help improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that a high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster and in under one minute obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre reinforced plastic parts. Acknowledgements: the Openmind project is funded by Horizon 2020, European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
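The Gaussian random-search calibration described above can be sketched directly. Here a simple analytic function stands in for the trained regressor, and the means, standard deviations and target are invented for illustration.

```python
import random

def gaussian_calibrate(model, mean, std, target, n_samples=5000, seed=3):
    """Random-search calibration as described above: draw candidate
    inputs from a Gaussian around each input's mean/std, evaluate the
    (surrogate) model, and keep the input whose output lands closest
    to the target."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(n_samples):
        x = [rng.gauss(m, s) for m, s in zip(mean, std)]
        err = abs(model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Toy surrogate standing in for the trained 'friction on die 2' model.
surrogate = lambda x: 2.0 * x[0] + 0.5 * x[1] ** 2
inputs, error = gaussian_calibrate(surrogate, mean=[1.0, 1.0],
                                   std=[0.3, 0.3], target=3.0)
print(inputs, error)
```

Because each candidate only needs one model evaluation, the loop is fast, which is consistent with the sub-minute calibration times reported; the heuristics mentioned above would bias the sampling toward promising regions instead of drawing blindly.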
Procedia PDF Downloads 241
5107 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design
Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell
Abstract:
The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method to create an orthotic brace involves casting the patient’s torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist’s skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient’s torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient’s torso and pelvis, 2) the patient’s pre-rectification plaster mold, and 3) the patient’s post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks. The 3D stick models were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software.
3D surface deviation maps, generated using CloudCompare open-source software, represented volume differences between the scans. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patients’ torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads (usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad)). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape, location, and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that can accelerate the brace design process and make steps towards an automated digital rectification process.
Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity
Procedia PDF Downloads 145
5106 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: in the first phase, a conventional approach such as unsupervised BIC-based detection is utilized to detect speaker changes and train a neural network, while in the second phase the trained parameters output by the neural network are used to predict the next incoming audio stream. Using this approach, an accuracy comparable to similar BIC-based approaches is achieved, with a significant improvement in computation time.
Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation
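The BIC-based change detection used in the first phase can be illustrated with a minimal ΔBIC computation over 1-D features; real systems use multivariate MFCC vectors with full covariances, so this scalar version is only a sketch:

```python
import math

def delta_bic(samples, split, lam=1.0):
    """ΔBIC for a hypothesised speaker change at index `split` of a 1-D
    feature sequence; positive values favour a two-speaker (change) model."""
    def half_loglike(x):
        n = len(x)
        mu = sum(x) / n
        var = sum((v - mu) ** 2 for v in x) / n
        return 0.5 * n * math.log(var + 1e-12)
    data_term = (half_loglike(samples)
                 - half_loglike(samples[:split])
                 - half_loglike(samples[split:]))
    penalty = lam * math.log(len(samples))  # 0.5*(d + d*(d+1)/2)*log N with d = 1
    return data_term - penalty

# Two deterministic "speakers": features around 0.0, then around 5.0
speech = [0.0, 0.1, -0.1] * 10 + [5.0, 5.1, 4.9] * 10
```

A change is declared where ΔBIC peaks above zero; in the two-phase scheme above, those detected boundaries would supply the training labels for the neural network.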
Procedia PDF Downloads 502
5105 Pressure Gradient Prediction of Oil-Water Two Phase Flow through Horizontal Pipe
Authors: Ahmed I. Raheem
Abstract:
In this thesis, stratified and stratified-wavy flow regimes have been investigated numerically for oil (1.57 mPa·s viscosity and 780 kg/m³ density) and water two-phase flow in small and large horizontal steel pipes, with diameters between 0.0254 and 0.508 m, using ANSYS Fluent software. The volume of fluid (VOF) method was applied to the two-phase flow together with a two-equation turbulence model (realizable k-ε).
Keywords: CFD, two-phase flow, pressure gradient, volume of fluid, large diameter, horizontal pipe, oil-water stratified and stratified wavy flow
Procedia PDF Downloads 433
5104 The Difference in Basic Skills among Different Positional Players in Football
Authors: Habib Sk, Ashoke Kumar Biswas
Abstract:
Football is a team game. The eleven players of each team are arranged in different positions of play to serve specific tasks during a game. Some such basic positions in a soccer game are (i) goalkeepers, (ii) defenders, (iii) midfielders, and (iv) forwards. Irrespective of position, all football players are required to learn and become skilled in basic soccer skills like passing, receiving, heading, throwing, dribbling, etc. The purpose of the study was to find out the differences in these basic soccer skills among positional players in football, if any. A total of thirty-nine (39) teenage football players between 13 and 19 years old were selected from Hooghly district in West Bengal, India, as subjects. Among them were seven (7) goalkeepers, twelve (12) defenders, thirteen (13) midfielders, and seven (7) forwards. Passing, dribbling, tackling, heading, and receiving were the selected basic soccer skills. The performance of the subjects of each positional group in the selected soccer skills was tested using a standard test for each. On the basis of results obtained through statistical analysis of the data, the following was found: i) there was a significant difference among the groups in passing, dribbling, and heading but not in receiving; ii) the goalkeepers and defenders were the weakest in all selected soccer skills; iii) midfielders were found to be better in receiving than in the other three skills of passing, dribbling, and heading; and iv) the forward group of players was found to be better in passing, dribbling, and heading but weakest in receiving compared to the other groups.
Keywords: performance, difference, skill, fundamental, soccer, position
Procedia PDF Downloads 146
5103 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test
Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano
Abstract:
This study proposes a novel use of a method to determine the mechanical properties of fruits by means of indentation tests. The method combines experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of a banana. The banana was assumed to be a one-layer material with isotropic linear elastic mechanical behavior; the Young's modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
Keywords: finite element method, fruits, inverse analysis, mechanical properties
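The inverse-analysis idea, matching a simulated force-depth curve to the measured one, can be sketched with a simple stand-in for the finite element model; the Hertzian contact formula, indenter radius, and Poisson's ratio below are illustrative assumptions, not the paper's actual model:

```python
def hertz_force(E, depth, R=0.005, nu=0.49):
    """Hertzian spherical-indenter contact force (SI units: Pa, m -> N)."""
    return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * (R ** 0.5) * depth ** 1.5

def fit_modulus(depths, forces, candidates):
    """Inverse analysis by brute force: choose the modulus whose simulated
    force-depth curve best matches the measured one (least squares)."""
    def sse(E):
        return sum((hertz_force(E, d) - f) ** 2 for d, f in zip(depths, forces))
    return min(candidates, key=sse)

# Synthetic "measurements" generated with E = 0.3 MPa, the value reported above
E_true = 0.3e6
depths = [0.001 * i for i in range(1, 6)]
forces = [hertz_force(E_true, d) for d in depths]
E_fit = fit_modulus(depths, forces, candidates=[0.1e6 + 0.05e6 * i for i in range(9)])
```

In the actual study the forward model is a finite element simulation rather than a closed-form contact law, but the fitting loop has the same structure.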
Procedia PDF Downloads 358
5102 Presentation of Transgender Identities
Authors: Tony Chapman-Wilson
Abstract:
Applied theatre is not an ultimate vehicle for creating social change, but rather an opportunity of hope that the production material might effect it. Theatre-makers are able to deconstruct socially and politically challenging themes to encourage their audience to witness lived experiences as they consider themes of concern and injustice. This allows writers to (re)present the lived experiences of trans people, and for social injustice, continued transphobia, and lack of equity to be presented to an audience for debate. There needs to be a stronger position and presence of trans voices, and active participation presented of these, rather than just the cisgender lens and standpoint. This research examines the relationship between human rights and theatre, considers global examples of this practice, and explores the negatives formed from this relationship and how it may be developed in the future. It focuses on the ability of theatre to denounce violations of human rights and considers the power of theatre to raise the audience's awareness of such violations and their potential for action; audience members may themselves be part of the oppressed, or indeed an oppressor. The fundamental assertion here is not one of evidenced social change, but of raising the awareness of the audience and the potential for social activism and action. The practice of applied theatre is one that is experienced by the audience and the project participants alike, with the intention that theatre may consider how people interact with one another. This paper examines the opportunity of verbatim theatre techniques to allow a cis-led, trans-collaborative research project to (re)present intergenerational trans identities.
Keywords: applied theatre, verbatim, transgender, social justice
Procedia PDF Downloads 42
5101 Development of Latent Fingerprints on Non-Porous Surfaces Recovered from Fresh and Sea Water
Authors: Somaya Madkour, Abeer Sheta, Fatma Badr El Dine, Yasser Elwakeel, Nermine AbdAllah
Abstract:
Criminal offenders have a fundamental goal: not to leave any traces at the crime scene. Some may suppose that items recovered underwater have no forensic value and therefore try to destroy the traces by throwing items into water, where they are subjected to destructive environmental effects. This can represent a challenge for forensic experts investigating finger marks. Accordingly, the present study was conducted to determine the optimal method for developing latent fingerprints on non-porous surfaces submerged in aquatic environments for different time intervals. The two factors analyzed in this study were the nature of the aquatic environment and the length of submersion time. In addition, the quality of the developed finger marks depending on the method used was also assessed. Latent fingerprints were therefore deposited on metallic, plastic, and glass objects and submerged in fresh or sea water for one, two, and ten days. After recovery, the items were subjected to cyanoacrylate fuming, black powder, and small particle reagent processing, and the prints were examined. Each print was evaluated according to a fingerprint quality assessment scale. The present study demonstrated that the duration of submersion affects the quality of finger marks: the longer the duration, the worse the quality. The best visualization results were achieved using cyanoacrylate, in both fresh and sea water. This study also revealed that exposure to sea water had a more destructive influence on the quality of detected finger marks.
Keywords: fingerprints, fresh water, sea, non-porous
Procedia PDF Downloads 455
5100 DNA Barcoding Application in Study of Ichthyo-Biodiversity in Rivers of Pakistan
Authors: Asma Karim
Abstract:
Fish taxonomy plays a fundamental role in the study of biodiversity. However, traditional methods of fish taxonomy rely on morphological features, which can lead to confusion due to great similarities between closely related species. To overcome this limitation, modern taxonomy employs DNA barcoding as a species identification method. This involves using a short standardized mitochondrial DNA region as a barcode, specifically a 658 base pair fragment near the 5′ end of the mitochondrial cytochrome c oxidase subunit 1 (CO1) gene, exploiting the diversity in this region for the identification of species. To test the effectiveness and reliability of DNA barcoding, 25 fish specimens from nine different fish species found in various rivers of Pakistan were identified morphologically using a dichotomous key at the start of the study. The specimens comprised nine freshwater species: Mystus cavasius, Mystus bleekeri, Osteobrama cotio, Labeo rohita, Labeo calbasu, Labeo gonius, Cyprinus carpio, Catla catla, and Cirrhinus mrigala. DNA was extracted from one of the pectoral fins, and a partial sequence of the CO1 gene was amplified using the conventional PCR method. Analysis of the barcodes confirmed that the genetically identified fishes were the same as those identified morphologically at the beginning of the study. The sequences were also analyzed for biodiversity and phylogenetic studies. Based on the results of the study, it can be concluded that DNA barcoding is an effective and reliable method for studying biodiversity and conducting phylogenetic analysis of fish species in Pakistan.
Keywords: DNA barcoding, fresh water fishes, taxonomy, biodiversity, Pakistan
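Barcode comparison ultimately reduces to measuring similarity between aligned CO1 fragments; a minimal sketch, using toy sequences far shorter than the 658 bp barcode:

```python
def percent_identity(seq_a, seq_b):
    """Fraction of matching bases between two equal-length aligned fragments."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Hypothetical aligned CO1-like fragments differing at two positions
frag_a = "ATGGCACTTCTCTACCTAGTATTCGG"
frag_b = "ATGGCACTCCTCTACCTAGTATTTGG"
ident = percent_identity(frag_a, frag_b)
```

Real barcoding pipelines align the amplified fragment against reference databases (e.g. with BLAST) rather than comparing pre-aligned strings, but the underlying identity measure is the same.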
Procedia PDF Downloads 108
5099 Molecular Insights into the 5α-Reductase Inhibitors: Quantitative Structure Activity Relationship, Pre-Absorption, Distribution, Metabolism, and Excretion and Docking Studies
Authors: Richa Dhingra, Monika, Manav Malhotra, Tilak Raj Bhardwaj, Neelima Dhingra
Abstract:
5-Alpha-reductase (5AR) is a membrane-bound, NADPH-dependent enzyme that converts the male hormone testosterone (T) into the more potent androgen dihydrotestosterone (DHT). DHT is required for the development and function of male sex organs, but its overproduction has been found to be associated with physiological conditions like Benign Prostatic Hyperplasia (BPH). Thus the inhibition of 5ARs could be a key target for the treatment of BPH. In the present study, 2D and 3D Quantitative Structure Activity Relationship (QSAR) pharmacophore models have been generated for 5AR based on known inhibitory concentration (IC₅₀) values, with extensive validation. The four-featured 2D pharmacophore-based PLS model correlated the topological interactions (an -OH group connected by one single bond) (SsOHE-index), semi-empirical (Quadrupole2), and physicochemical descriptors (mol. wt, bromine count, chlorine count) with 5AR inhibitory activity, and has the highest correlation coefficient (r² = 0.98, q² = 0.84; F = 57.87, pred r² = 0.88). Internal and external validation was carried out using test and proposed sets of compounds. The contribution plot of electrostatic field effects and steric interactions generated by 3D-QSAR showed interesting results in terms of internal and external predictability. The well-validated 2D Partial Least Squares (PLS) and 3D k-nearest neighbour (kNN) models were used to search for novel 5AR inhibitors with different chemical scaffolds. To gain more insight into the molecular mechanism of action of these steroidal derivatives, molecular docking and in silico absorption, distribution, metabolism, and excretion (ADME) studies were also performed. Studies have revealed hydrophobic and hydrogen bonding of the ligand with residues Alanine (ALA) 63A, Threonine (THR) 60A, and Arginine (ARG) 456A of the 4AT0 protein at the hinge region.
The results of the QSAR, molecular docking, and in silico ADME studies provide guidelines and mechanistic scope for the identification of more potent 5-alpha-reductase inhibitors (5ARI).
Keywords: 5α-reductase inhibitor, benign prostatic hyperplasia, ligands, molecular docking, QSAR
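The 3D kNN model mentioned above predicts activity from the nearest training compounds in descriptor space; a minimal sketch with a hypothetical, scaled two-descriptor training set:

```python
def knn_predict(train_X, train_y, query, k=3):
    """k-nearest-neighbour activity prediction: average the activity of the
    k training compounds closest to the query in descriptor space."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    ranked = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], query))
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical two-descriptor training set with pIC50-like activities
X = [[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.95, 0.85], [0.50, 0.50]]
y = [6.1, 6.3, 4.0, 4.2, 5.0]
pred = knn_predict(X, y, query=[0.12, 0.22], k=2)
```

The real model operates on the descriptors named in the abstract (SsOHE-index, Quadrupole2, molecular weight, halogen counts) after scaling; the toy numbers here only illustrate the mechanics.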
Procedia PDF Downloads 163
5098 Model Order Reduction of Complex Airframes Using Component Mode Synthesis for Dynamic Aeroelasticity Load Analysis
Authors: Paul V. Thomas, Mostafa S. A. Elsayed, Denis Walch
Abstract:
Airframe structural optimization at different design stages results in new mass and stiffness distributions which modify the critical design loads envelope. Determination of aircraft critical loads is an extensive analysis procedure which involves simulating the aircraft at thousands of load cases as defined in the certification requirements. It is computationally prohibitive to use a Global Finite Element Model (GFEM) for the load analysis, hence reduced order structural models are required which closely represent the dynamic characteristics of the GFEM. This paper presents the implementation of the Component Mode Synthesis (CMS) method for the generation of high fidelity Reduced Order Models (ROMs) of complex airframes. Here, a sub-structuring technique is used to divide the complex higher order airframe dynamical system into a set of subsystems. Each subsystem is reduced to fewer degrees of freedom using matrix projection onto a carefully chosen reduced order basis subspace. The reduced structural matrices are assembled for all the subsystems through interface coupling, and the dynamic response of the total system is solved. The CMS method is employed to develop the ROM of a Bombardier Aerospace business jet, which is coupled with an aerodynamic model for dynamic aeroelastic load analysis under gust turbulence. Another set of dynamic aeroelastic loads is also generated employing a stick model of the same aircraft. The stick model is the reduced order modelling methodology commonly used in the aerospace industry, based on stiffness generation by unitary loading application. The extracted aeroelastic loads from both models are compared against those generated employing the GFEM. Critical loads, modal participation factors, and modal characteristics of the different ROMs are investigated and compared against those of the GFEM.
Results obtained show that the ROM generated using the Craig-Bampton CMS reduction process has superior dynamic characteristics compared to the stick model.
Keywords: component mode synthesis, Craig-Bampton reduction method, dynamic aeroelasticity analysis, model order reduction
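The Craig-Bampton reduction referred to in the abstract can be sketched on a small spring-mass chain: boundary DOFs are retained physically, while interior DOFs are represented by static constraint modes plus a few fixed-interface normal modes. The 5-DOF chain below is a toy example, not the airframe model:

```python
import numpy as np

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction: boundary DOFs are kept physically; interior
    DOFs are replaced by constraint modes plus fixed-interface normal modes."""
    dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(dofs, boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # Static constraint modes: interior response to unit boundary displacements
    Phi_c = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes (boundary clamped), lowest n_modes kept
    w2, vecs = np.linalg.eig(np.linalg.solve(Mii, Kii))
    order = np.argsort(w2.real)
    Phi_n = vecs.real[:, order[:n_modes]]
    # Assemble the projection basis and reduce both matrices
    nb = len(boundary)
    T = np.zeros((K.shape[0], nb + n_modes))
    T[np.ix_(boundary, np.arange(nb))] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = Phi_c
    T[np.ix_(interior, nb + np.arange(n_modes))] = Phi_n
    return T.T @ K @ T, T.T @ M @ T

# Toy substructure: 5-DOF spring-mass chain, one retained boundary DOF
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, boundary=np.array([n - 1]), n_modes=2)
w2_full = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
w2_red = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
```

With only two interior modes kept, the 3-DOF reduced system reproduces the lowest eigenvalue of the full 5-DOF chain closely, which is the property that makes CMS attractive for load analysis at airframe scale.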
Procedia PDF Downloads 209
5097 The Impact of Human Rights Legislations and Evolution
Authors: Emad Eid Nemr Danyal
Abstract:
The problem of respect for human rights in Southeast Asia has become a prime concern and is attracting the attention of the international community. The Association of Southeast Asian Nations (ASEAN) made human rights one of its fundamental issues in the ASEAN Charter in 2008, and subsequently the ASEAN Intergovernmental Commission on Human Rights (AICHR) was established. The AICHR is the Southeast Asian human rights body charged with the responsibilities, functions, and powers to promote and protect human rights. However, as of the end of 2016, the protective role assigned to the AICHR had not yet been fulfilled. This is shown by numerous cases of human rights violations that are still ongoing and have not yet been resolved. One case that has recently come to light is the human rights violations against the Rohingya people in Myanmar. Using a legal-normative approach, the study examines the urgency of establishing a human rights tribunal in Southeast Asia capable of making decisions binding on ASEAN members or responsible parties. Evidence suggests ASEAN needs regional courts to address human rights abuses in the ASEAN region. In addition, the study highlights three essential factors that ASEAN must consider when establishing a human rights tribunal, namely: the significant differences in democracy and human rights development among the members, the consistent implementation of the principle of non-interference, and the financial burden of sustaining the court.
Keywords: sustainable development, human rights, the right to development, the human rights-based approach to development, environmental rights, economic development, social sustainability, human rights protection, human rights violations, workers’ rights, justice, security
Procedia PDF Downloads 14
5096 Gender Differences in Negotiation: Considering the Usual Driving Forces
Authors: Claude Alavoine, Ferkan Kaplanseren
Abstract:
Negotiation is a specific form of interaction based on communication, into which the parties enter deliberately, each with clear but different interests or goals and a mutual dependency on a decision due to be taken at the end of the confrontation. Consequently, negotiation is a complex activity involving many different disciplines, from the strategic aspects and the decision-making process to the evaluation of alternatives or outcomes and the exchange of information. While gender differences can be considered one of the most researched topics within negotiation studies, empirical work and theory present much conflicting evidence about the role of gender in the process or the outcome. Furthermore, little interest has been shown in gender differences in the definition of what negotiation is, its essence, or its fundamental elements. Yet, as differences exist in practice, it might be essential to study whether the starting point of these discrepancies comes from different conceptions of what negotiation is and of what encourages the participants in their strategic decisions. Some recent and promising experiments made with diverse groups show that male and female participants in a common, shared situation barely consider in the same way the concepts of power, trust, or stakes, which are largely regarded as the usual driving forces of any negotiation. Furthermore, results from human resource self-assessment tests display and confirm considerable differences between individuals regarding essential behavioral dimensions like the capacity to improvise and to achieve, the aptitude to conciliate or to compete, and orientation towards power and group domination, which are also part of negotiation skills. Our intention in this paper is to confront these dimensions with negotiation's usual driving forces in order to build up new paths for further research.
Keywords: negotiation, gender, trust, power, stakes, strategies
Procedia PDF Downloads 509
5095 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the east catchments of El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and flood frequency in the nearby villages of Ras Baalbeck and Fekha. This paper discusses different levels of integration between GIS and hydrological models (HEC-HMS and HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are performed. The study area corresponds to the east basin (Ras Baalbeck and Fekha), comprising nearly 350 km² and situated in the Bekaa Valley of Lebanon.
The case study presented in this paper has a database derived from Lebanese Army topographic maps of the region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and a digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Aarsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine the ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using a flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the watershed of Aarsal and that of Ras Baalbeck; the strongest flood reported in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to produce flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models are a kind of flexible procedure for an ungaged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found. In order to have a general methodology based on these ideas, further calibration and reconciliation of the results against many flood-event parameters and catchment properties is required.
Keywords: flood risk management, flash flood, semi arid region, El Assi River, hazard maps
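The SCS method chosen for the hydrologic model estimates direct runoff from storm depth and a curve number; a minimal sketch of the standard curve-number equation (the curve number in the example is hypothetical, not a calibrated value for Ras Baalbeck):

```python
def scs_runoff(P, CN):
    """SCS curve-number direct runoff depth (mm) for a storm depth P (mm)."""
    S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
    Ia = 0.2 * S               # initial abstraction (standard assumption)
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P + 0.8 * S)

# Hypothetical curve number; in the study the value would be calibrated
# against observed events such as the May 2014 flood described above.
runoff = scs_runoff(100.0, 80.0)   # 100 mm storm, CN = 80
```

Within HEC-HMS this loss computation is applied per sub-basin and time step; the sketch only shows the storm-total form of the relation.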
Procedia PDF Downloads 478
5094 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and the system's performance. In addition, a relative index is used as a confidence measure in cases where the GMM and VQ identifications contradict each other.
Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
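The LBG clustering used to initialize the GMM can be sketched in a few lines: the codebook is grown by splitting every centroid, then refined with nearest-centroid assignment and re-averaging passes (the 2-D toy data below stands in for MFCC feature vectors):

```python
def lbg_codebook(vectors, size, eps=0.01, iters=20):
    """Linde-Buzo-Gray: grow a codebook by splitting every centroid, then
    refine with nearest-centroid assignment and re-averaging (k-means) passes."""
    dim = len(vectors[0])
    book = [[sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]]
    while len(book) < size:
        # Split each centroid into a slightly perturbed pair
        book = [[c * (1 + s) for c in cb] for cb in book for s in (eps, -eps)]
        for _ in range(iters):
            clusters = [[] for _ in book]
            for v in vectors:
                i = min(range(len(book)),
                        key=lambda j: sum((a - b) ** 2 for a, b in zip(v, book[j])))
                clusters[i].append(v)
            book = [[sum(v[d] for v in cl) / len(cl) for d in range(dim)] if cl else cb
                    for cl, cb in zip(clusters, book)]
    return book

# Two well-separated 2-D "feature" clusters standing in for MFCC frames
data = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.2], [5.0, 5.0], [5.2, 5.1], [5.1, 5.2]]
book = sorted(lbg_codebook(data, size=2))
```

In the proposed system the resulting centroids serve both as the VQ codebook and as initial component means for EM training of the GMM, which is what improves the convergence rate noted in the abstract.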
Procedia PDF Downloads 309