Search results for: vulnerability prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2837

647 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are flowing into the wind power generation industry as a leading source of clean energy. The wind energy sector depends heavily on the prediction of wind speed, which is by nature highly stochastic. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion in deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE). The results showed a significant increase in prediction accuracy: using the vector components of wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
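As a rough illustration of the regression step described above, the sketch below implements a plain single-fidelity Gaussian process regressor in NumPy and compares RMSE with and without the wind-vector components as extra predictors. The data, length scale, and noise level are synthetic stand-ins, not the RUNE measurements or the paper's multi-fidelity model.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean of a zero-mean GP regressor with RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical wind signal built from (u, v) components plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
u = np.sin(t) + 0.1 * rng.standard_normal(200)        # east-west component
v = np.cos(0.5 * t) + 0.1 * rng.standard_normal(200)  # north-south component
speed = np.hypot(u, v)

X_scalar = speed[:-1, None]                               # speed history only
X_vector = np.column_stack([speed[:-1], u[:-1], v[:-1]])  # plus components
y = speed[1:]                                             # one-step-ahead target

split = 150
pred_s = gp_predict(X_scalar[:split], y[:split], X_scalar[split:])
pred_v = gp_predict(X_vector[:split], y[:split], X_vector[split:])
```

Comparing `rmse(y[split:], pred_s)` against `rmse(y[split:], pred_v)` mirrors the paper's comparison of speed-only versus component-augmented predictors.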

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 125
646 Probabilistic Damage Tolerance Methodology for Solid Fan Blades and Discs

Authors: Andrej Golowin, Viktor Denk, Axel Riepe

Abstract:

Solid fan blades and discs in aero engines are subjected to high combined low and high cycle fatigue loads especially around the contact areas between blade and disc. Therefore, special coatings (e.g. dry film lubricant) and surface treatments (e.g. shot peening or laser shock peening) are applied to increase the strength with respect to combined cyclic fatigue and fretting fatigue, but also to improve damage tolerance capability. The traditional deterministic damage tolerance assessment based on fracture mechanics analysis, which treats service damage as an initial crack, often gives overly conservative results especially in the presence of vibratory stresses. A probabilistic damage tolerance methodology using crack initiation data has been developed for fan discs exposed to relatively high vibratory stresses in cross- and tail-wind conditions at certain resonance speeds for limited time periods. This Monte-Carlo based method uses a damage databank from similar designs, measured vibration levels at typical aircraft operations and wind conditions and experimental crack initiation data derived from testing of artificially damaged specimens with representative surface treatment under combined fatigue conditions. The proposed methodology leads to a more realistic prediction of the minimum damage tolerance life for the most critical locations applicable to modern fan disc designs.
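The Monte-Carlo step of such a methodology can be sketched as follows. The input distributions, the Basquin-type S-N relation, and every numerical value below are illustrative assumptions for demonstration only, not the damage databank, vibration measurements, or specimen test data the methodology actually uses.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte-Carlo samples

# Hypothetical inputs: vibratory stress amplitude [MPa] at a critical disc
# location, and a lognormally scattered Basquin coefficient fitted to
# crack-initiation tests on artificially damaged, surface-treated specimens.
stress = rng.normal(loc=120.0, scale=15.0, size=n)
A = rng.lognormal(mean=np.log(1e12), sigma=0.3, size=n)
m = 3.0  # assumed Basquin exponent

# Basquin relation: cycles to crack initiation N = A * S^(-m)
cycles = A * np.clip(stress, 1.0, None) ** (-m)

# "Minimum" damage tolerance life as a conservative lower percentile
min_life = np.percentile(cycles, 0.1)
```

The low percentile plays the role of the minimum damage tolerance life for the most critical location; a real implementation would sample correlated loads per mission profile rather than independent draws.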

Keywords: combined fatigue, damage tolerance, engine, surface treatment

Procedia PDF Downloads 462
645 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown performance comparable or even superior to that of traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 59
644 The Signaling Power of ESG Accounting in Sub-Saharan Africa: A Dynamic Model Approach

Authors: Haruna Maama

Abstract:

Environmental, social and governance (ESG) reporting is gaining considerable attention despite being voluntary. Meanwhile, providing ESG reporting consumes resources, raising the question of its value relevance. The study examined the impact of ESG reporting on the market value of listed firms in sub-Saharan Africa (SSA), based on the annual and integrated reports of 276 listed SSA firms. The integrated reporting scores of the firms were analysed using a content analysis method. A multiple regression estimation technique using a GMM approach was employed for the analysis. The results revealed that ESG has a positive relationship with firms' market value, suggesting that investors are interested in the ESG information disclosure of firms in SSA. This suggests that extensive ESG disclosures are attempts by firms to obtain the approval of powerful social, political and environmental stakeholders, especially institutional investors. Furthermore, the market value evidence is consistent with signalling theory, which postulates that firms provide integrated reports as a signal to influence the behaviour of stakeholders. This finding reflects the value investors place on social, environmental and governance disclosures, which affirms the view that conventional investors care about the social, environmental and governance issues of their potential or existing investee firms. Overall, the evidence is consistent with the prediction of signalling theory. In the context of this theory, integrated reporting is seen as part of firms' overall competitive strategy to influence investors' behaviour. The findings of this study make unique contributions to knowledge and practice in corporate reporting.

Keywords: environmental accounting, ESG accounting, signalling theory, sustainability reporting, sub-Saharan Africa

Procedia PDF Downloads 55
643 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

Article 9 of the European Union (EU) Artificial Intelligence (AI) Act requires that risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of AI systems. Based on the NIST four fundamental criteria for explainability, explainability threats can be classified into four (4) sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Social-related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 46
642 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows

Authors: J. P. Panda, K. Sasmal, H. V. Warrior

Abstract:

Eddy viscosity models in turbulence modeling can be mainly classified as linear and nonlinear models. Linear formulations are simple and require less computational resources but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation of Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all the possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-epsilon model. The proposed model can be easily incorporated in the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for prediction of flow in estuaries and lakes, where depth is comparatively low. The model predictions of marine turbulence and other related data (e.g. sea surface temperature, surface heat flux and vertical temperature profile) can be utilized in short-term ocean and climate forecasting and warning systems.

Keywords: eddy viscosity, turbulence modeling, GOTM, CFD

Procedia PDF Downloads 181
641 Determining the Performance of Data Mining Algorithms in Identifying the Influential Factors and Predicting Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the common causes of disability and mortality: it is the fourth leading cause of death in the world, and the third according to some sources. Only one third of patients with ischemic stroke fully recover; one third are left with permanent disability, and one third die. The use of predictive models to predict stroke therefore has a vital role in reducing the complications and costs related to this disease. The aim of this study was to identify the effective factors and predict ischemic stroke with the help of data mining (DM) methods. The present study was a descriptive-analytic study. The population consisted of 213 patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Thus, predictive models created with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
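The splitting criterion behind CHAID - choosing, at each node, the predictor with the strongest chi-square association with the outcome - can be sketched in a few lines. The binary risk-factor data below are simulated for illustration (with diabetes built in as the dominant factor) and do not reproduce the study's checklist data.

```python
import numpy as np

def chi2_stat(x, y):
    """Pearson chi-square statistic of the contingency table of x vs y."""
    cats_x, cats_y = np.unique(x), np.unique(y)
    obs = np.array([[np.sum((x == a) & (y == b)) for b in cats_y]
                    for a in cats_x], dtype=float)
    expected = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return ((obs - expected) ** 2 / expected).sum()

def best_chaid_split(X, y, names):
    """Pick the predictor most strongly associated with the outcome:
    the core idea behind each split in a CHAID decision tree."""
    scores = [chi2_stat(X[:, j], y) for j in range(X.shape[1])]
    j = int(np.argmax(scores))
    return names[j], scores[j]

# Simulated binary risk factors; diabetes is constructed to drive the outcome.
rng = np.random.default_rng(0)
n = 500
anemia = rng.integers(0, 2, n)
diabetes = rng.integers(0, 2, n)
stroke = np.where(diabetes == 1,
                  rng.random(n) < 0.8,   # high event rate with the factor
                  rng.random(n) < 0.1).astype(int)
X = np.column_stack([anemia, diabetes])
factor, score = best_chaid_split(X, stroke, ["anemia", "diabetes"])
```

A full CHAID implementation would additionally merge categories, apply Bonferroni-adjusted p-values, and recurse on the child nodes.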

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 155
640 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models

Authors: Yungtai Lo

Abstract:

Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
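The two-part idea - one model for whether intake is zero, another for how much when it is positive - can be sketched without the random-effects machinery. This cross-sectional simplification uses an empirical Bernoulli part and a log-normal part, and is only a didactic stand-in for the paper's mixed models; the intake values are invented.

```python
import numpy as np

def two_part_fit(y):
    """Fit a minimal two-part model to semi-continuous data: a Bernoulli
    part for P(y > 0) and a log-normal part for y given y > 0."""
    y = np.asarray(y, dtype=float)
    p_pos = (y > 0).mean()
    logs = np.log(y[y > 0])
    return p_pos, logs.mean(), logs.std(ddof=1)

def two_part_mean(p_pos, mu, sigma):
    """Overall mean E[y] = P(y > 0) * E[y | y > 0] under log-normality."""
    return p_pos * np.exp(mu + 0.5 * sigma**2)

# Hypothetical daily milk intakes (oz): intervention toddlers are more
# often fully bottle-weaned (y = 0) and drink less when not weaned.
rng = np.random.default_rng(7)
control = np.where(rng.random(300) < 0.3, 0.0, rng.lognormal(2.5, 0.4, 300))
treated = np.where(rng.random(300) < 0.6, 0.0, rng.lognormal(2.2, 0.4, 300))
mean_c = two_part_mean(*two_part_fit(control))
mean_t = two_part_mean(*two_part_fit(treated))
```

The gap between `mean_c` and `mean_t` combines both effects the paper separates: more zeros (weaning) and lower positive intake.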

Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve

Procedia PDF Downloads 333
639 Effect of Particle Aspect Ratio and Shape Factor on Air Flow inside Pulmonary Region

Authors: Pratibha, Jyoti Kori

Abstract:

Particles encountered in industry, harvesting, coal mines, etc. are not necessarily spherical; in general, it is difficult to find a perfectly spherical particle. Predicting the movement and deposition of non-spherical particles in distinct airway generations is much more difficult than for spherical particles. Moreover, there is extensive variability in deposition between the ducts of a particular generation and inside every alveolar duct, since particle concentrations can be much larger than the mean acinar concentration. Consequently, a large number of particles fail to be exhaled during expiration. This study presents a mathematical model for the movement and deposition of such non-spherical particles using the particle aspect ratio and shape factor. We analyse the pulsatile behavior under sinusoidal wall oscillation due to periodic breathing conditions through a non-Darcian porous medium representing the pulmonary region. Since the fluid is viscous and Newtonian, the generalized Navier-Stokes equations in a two-dimensional coordinate system (r, z) are used with boundary-layer theory. Results are obtained for various values of the Reynolds number, Womersley number, Forchheimer number, particle aspect ratio and shape factor. Numerical computation is done using a finite difference scheme on a very fine mesh in MATLAB. It is found that the overall air velocity is significantly increased by changes in aerodynamic diameter, aspect ratio, alveoli size, Reynolds number and pulse rate, while velocity decreases with increasing Forchheimer number.
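A heavily reduced sketch of the numerical approach: an explicit finite-difference update for viscous flow between two no-slip walls driven by a sinusoidal (breathing-like) pressure gradient. This is a 1D stand-in with made-up parameter values, not the paper's 2D (r, z) non-Darcian scheme with particle transport.

```python
import numpy as np

# Illustrative parameters (not from the study)
ny, L, nu = 21, 1.0, 0.01        # grid points, channel width, viscosity
dy = L / (ny - 1)
dt = 0.25 * dy**2 / nu           # respects the explicit stability limit dt <= dy^2/(2*nu)
omega, amp = 2 * np.pi, 1.0      # breathing-like sinusoidal forcing

u = np.zeros(ny)                 # no-slip walls: u[0] = u[-1] = 0 throughout
t = 0.0
for _ in range(2000):
    # second-order central difference for the viscous term
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dy**2
    # explicit Euler step: du/dt = forcing + nu * d2u/dy2
    u[1:-1] += dt * (amp * np.sin(omega * t) + nu * lap)
    t += dt
```

The resulting velocity profile oscillates with the forcing, the discrete analogue of the pulsatile behavior the paper resolves on a much finer mesh.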

Keywords: deposition, interstitial lung diseases, non-Darcian medium, numerical simulation, shape factor

Procedia PDF Downloads 161
638 Immature Platelet Fraction and Immature Reticulocyte Fraction as Early Predictors of Hematopoietic Recovery Post Stem Cell Transplantation

Authors: Aditi Mittal, Nishit Gupta, Tina Dadu, Anil Handoo

Abstract:

Introduction: Hematopoietic stem cell transplantation (HSCT) is a curative treatment for hematologic malignancies and other clinical conditions. Its main objective is to reconstitute the hematopoietic system of the recipient by infusing donor hematopoietic stem cells. Engraftment is the first sign of bone marrow recovery. The main objective of this study is to assess the immature platelet fraction (IPF) and immature reticulocyte fraction (IRF) as early indicators of engraftment after hematopoietic stem cell transplantation. Methods: Patients of all age groups and both genders undergoing either autologous or allogeneic transplants were included in the study. All CBC samples were run on a Mindray CAL-8000 (BC-6800 Plus; Shenzhen, China) analyser and assessed for IPF and IRF. Neutrophil engraftment was defined as the first of three consecutive days with an ANC >0.5 x 10⁹/L, and platelet engraftment as a count >20 x 10⁹/L. The cut-off value for IRF was calculated as 13.5% with a CV of 5%, and for IPF as 19% with a CV of 12%. Results: The study sample comprised 200 patients, of whom 116 had undergone autologous HSCT and 84 allogeneic HSCT. We observed that IRF anticipated neutrophil recovery an average of 5 days earlier than IPF. Although there was no significant difference between IPF and IRF for the prediction of platelet recovery, IRF preceded IPF by 1 or 2 days in 25% of cases. Conclusions: Both IPF and IRF can be used as reliable predictors of post-transplant engraftment; however, IRF appears more reliable than IPF as a simple, inexpensive, and widely available tool for predicting marrow recovery several days before engraftment.
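The engraftment definition quoted above (first of three consecutive days with the count above a threshold) is a simple scan over the daily series; a sketch with hypothetical counts:

```python
def engraftment_day(counts, threshold, run_length=3):
    """Index of the first of `run_length` consecutive days on which the
    count exceeds `threshold`; None if engraftment is never reached."""
    run = 0
    for day, count in enumerate(counts):
        run = run + 1 if count > threshold else 0
        if run == run_length:
            return day - run_length + 1
    return None

# Hypothetical post-transplant ANC series (x 10^9/L), day 0 = infusion
anc = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9, 1.2]
neutrophil_engraftment = engraftment_day(anc, threshold=0.5)
```

The same function applies to platelet series with a threshold of 20 x 10^9/L (using `run_length=1` if a single qualifying day is the chosen criterion).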

Keywords: transplantation, stem cells, reticulocyte, engraftment

Procedia PDF Downloads 75
637 Exploring Antifragility Principles in Humanitarian Supply Chain: The Key Role of Information Systems

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

The COVID-19 pandemic has been a major and global disruption that has affected all supply chains on a worldwide scale. Consequently, the question posed by this communication is to understand how - in the face of such disruptions - supply chains, including their actors, management tools, and processes, react, survive, adapt, and even improve. To do so, the concepts of resilience and antifragility applied to a supply chain have been leveraged. This article proposes to perceive resilience as a step to surpass in moving towards antifragility. The research objective is to propose an analytical framework to measure and compare resilience and antifragility, with antifragility seen as a property of a system that improves when subjected to disruptions rather than merely resisting them, as is the case with resilience. A single case study - MSF Logistics (France) - was examined using a qualitative methodology. Semi-structured interviews were conducted in person and remotely in multiple phases: during and immediately after the COVID crisis (8 interviews from March 2020 to April 2021), followed by a new round from September to November 2023. A Delphi method was employed. The interviews were analyzed using coding and a thematic framework. One theoretical contribution is consolidating the field of supply chain resilience research by precisely characterizing the dimensions of resilience for a humanitarian supply chain (reorganization, collaboration mediated by IS, humanitarian culture). In this regard, a managerial contribution of this study is providing a guide for managers to identify the dimensions and sub-dimensions of supply chain resilience, enabling them to focus their decisions and actions on the dimensions that will enhance resilience. Most importantly, another contribution is comparing the concepts of resilience and antifragility and proposing an analytical framework for antifragility - namely, the mechanisms on which MSF Logistics relied to capitalize on uncertainties, contingencies, and shocks rather than simply enduring them. For MSF Logistics, antifragility manifested through the ability to identify opportunities hidden behind the uncertainties and shocks of COVID-19, reducing vulnerability, and fostering a culture that encourages innovation and the testing of new ideas. Logistics, particularly in the humanitarian domain, must be able to adapt to environmental disruptions. In this sense, this study identifies and characterizes the dimensions of resilience implemented by humanitarian logistics. Moreover, this research goes beyond the concept of resilience to propose an analytical framework for the concept of antifragility. The organization studied emerged stronger from the COVID-19 crisis due to the mechanisms we identified, allowing us to characterize antifragility. Finally, the results show that the information system plays a key role in antifragility.

Keywords: antifragility, humanitarian supply chain, information systems, qualitative research, resilience

Procedia PDF Downloads 52
636 Numerical Investigation on the Feasibility of Electromagnetic Waves for Water Hardness Detection in Industrial Water Cooling Systems

Authors: K. H. Teng, A. Shaw, M. Ateeq, A. Al-Shamma'a, S. Wylie, S. N. Kazi, B. T. Chew

Abstract:

A numerical and experimental study of a novel electromagnetic wave technique to detect water hardness concentration is presented in this paper. Simulation is a powerful and efficient engineering method that allows quick and accurate prediction for various engineering problems. The RF module is used in this research to predict and design electromagnetic wave propagation and the resonance effect of a guided wave to detect water hardness concentration in terms of frequency domain, eigenfrequency, and mode analysis. A cylindrical cavity resonator is simulated and designed for the electric field of the fundamental mode (TM010). The three-dimensional governing equations were discretized with the finite volume method. Boundary conditions for the simulation were the cavity material (aluminum), two ports (transmitting and receiving), and the assumption of vacuum inside the cavity. The designed model successfully simulated the fundamental mode and extracted the S21 transmission signal within the 2.1-2.8 GHz region. The signal spectrum under the effect of the port selection technique and the dielectric properties of different water concentrations was studied. A linear increase in magnitude in the frequency domain is observed as the concentration increases. The numerical results were closely validated by the available experimental data. Hence, the COMSOL simulation package is capable of providing acceptable data for microwave research.
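For orientation, the TM010 resonant frequency of an ideal air-filled cylindrical cavity follows a closed form, f = c·χ01/(2πa), with χ01 ≈ 2.405 the first zero of the Bessel function J0 and a the cavity radius. The 47 mm radius below is an illustrative value chosen to land in the reported 2.1-2.8 GHz window, not a dimension from the study.

```python
import math

C0 = 299_792_458.0      # speed of light in vacuum, m/s
CHI_01 = 2.404825558    # first zero of Bessel J0, which sets the TM010 mode

def tm010_frequency(radius_m):
    """Resonant frequency of the TM010 mode of an air-filled cylindrical
    cavity; it depends only on the radius, not on the cavity length."""
    return C0 * CHI_01 / (2 * math.pi * radius_m)

f_resonance = tm010_frequency(0.047)  # ~2.44 GHz for a 47 mm radius
```

In the full simulation, the dielectric loading by the water sample shifts this ideal-cavity value, which is what makes the resonance usable as a hardness probe.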

Keywords: electromagnetic wave technique, frequency domain, signal spectrum, water hardness concentration

Procedia PDF Downloads 254
635 The Role of Androgens in Prediction of Success in Smoking Cessation in Women

Authors: Michaela Dušková, Kateřina Šimůnková, Martin Hill, Hana Hruškovičová, Hana Pospíšilová, Eva Králíková, Luboslav Stárka

Abstract:

Smoking represents the most widespread substance dependence in the world. Several studies show nicotine's ability to alter women's hormonal homeostasis. Women smokers have higher testosterone and lower estradiol levels throughout life compared to non-smoking women. We monitored the effect of smoking discontinuation on the steroid spectrum in 40 premenopausal and 60 postmenopausal women smokers. These women were examined before they discontinued smoking and after 6, 12, 24, and 48 weeks of abstinence. At each examination, blood was collected to determine the steroid spectrum (measured by GC-MS) and LH, FSH, and SHBG (measured by IRMA). A repeated measures ANOVA model was used for evaluation of the data. The study was approved by the local Ethics Committee. Given the small number of premenopausal women who managed to abstain from smoking, only the data from the first 6-week period could be analyzed; a slight increase in androgens after smoking discontinuation occurred. In postmenopausal women, an increase in testosterone, dihydrotestosterone, dehydroepiandrosterone, and other androgens occurred as well. Nicotine replacement therapy, weight changes, and age did not play any role in the androgen level increase. Higher androgen levels correlated with failure in smoking cessation. Women smokers have higher androgen levels, which might play a role in the development of smoking dependence. Women successful in smoking cessation, compared to unsuccessful ones, have lower androgen levels both initially and after smoking discontinuation. The question remains what androgen levels women have before they start smoking.

Keywords: addiction, smoking, cessation, androgens

Procedia PDF Downloads 366
634 Ten Patterns of Organizational Misconduct and a Descriptive Model of Interactions

Authors: Ali Abbas

Abstract:

This paper presents a descriptive model of organizational misconduct based on observed patterns that occur before and after an ethical collapse. The patterns were classified by categorizing media articles on both for-profit and not-for-profit organizations. Based on the model parameters, the paper provides a descriptive model of various organizational deflection strategies under numerous scenarios, including situations where ethical complaints build up, situations under which whistleblowers become more prevalent, situations where large scandals relating to leadership occur, and strategies by which organizations deflect blame when pressure builds up or when the media finds out. The model parameters start with the premise of a tolerance to double standards in unethical acts when conducted by leadership or by members of corporate governance. Following this premise, the model explains how organizations engage in discursive strategies to cover up the potential conflicts that arise, including secret agreements and weakening stakeholders who may oppose the organizational acts. Deflection strategies include "preemptive" and "post-complaint" secret agreements, absence of (or vague) documented procedures, engaging in blame and scapegoating, remaining silent on complaints until the media finds out, as well as being slow (if at all) to acknowledge misconduct and fast to cover it up. The results of this paper may be used to alert organizational leaders to the implications of such shortsighted strategies toward unethical acts, even if they are deemed legal. Validation of the model assumptions through numerous media articles is provided.

Keywords: ethical decision making, prediction, scandals, organizational strategies

Procedia PDF Downloads 103
633 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous neural network (NARX) model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good results obtained very promising for future applications in this context.
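The regressor construction at the heart of a NARX model - predicting the next output from lagged outputs and lagged excitation - can be sketched as below. A linear least-squares map stands in for the neural network, and the lagged first-order system is an invented toy, not CERN magnet data.

```python
import numpy as np

def make_narx_features(y, u, ny=2, nu=2):
    """Build the NARX regressor matrix: each row holds the past ny outputs
    and the past nu exogenous inputs used to predict the next output."""
    start = max(ny, nu)
    rows = [np.concatenate([y[t - ny:t], u[t - nu:t]])
            for t in range(start, len(y))]
    return np.array(rows), y[start:]

# Hypothetical system where the field y lags the excitation u.
rng = np.random.default_rng(1)
u = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t - 1]   # simple lagged response

X, target = make_narx_features(y, u)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)  # linear stand-in for the net
rmse = np.sqrt(np.mean((target - X @ theta) ** 2))
```

A NARX network replaces the linear map `theta` with a feedforward net over the same lagged features, which is what lets it capture hysteresis nonlinearity.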

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 113
632 Mobile Technology Use by People with Learning Disabilities: A Qualitative Study

Authors: Peter Williams

Abstract:

Mobile digital technology, in the form of smart phones, tablets, laptops and their accompanying functionality/apps etc., is becoming ever more used by people with Learning Disabilities (LD) - for entertainment, to communicate and socialize, and enjoy self-expression. Despite this, there has been very little research into the experiences of such technology by this cohort, its role in articulating personal identity and self-advocacy and the barriers encountered in negotiating technology in everyday life. The proposed talk describes research funded by the British Academy addressing these issues. It aims to explore: i) the experiences of people with LD in using mobile technology in their everyday lives - the benefits, in terms of entertainment, self-expression and socialising, and possible greater autonomy; and the barriers, such as accessibility or usability issues, privacy or vulnerability concerns etc. ii) how the technology, and in particular the software/apps and interfaces, can be improved to enable greater access to the entertainment, information, communication and other benefits it can offer. It is also hoped that results will inform parents, carers and other supporters regarding how they can use the technology with their charges. Rather than the project simply following the standard research procedure of gathering and analysing 'data' to which individual 'research subjects' have no access, people with Learning Disabilities (and their supporters) will help co-produce an accessible, annotated and hyperlinked living e-archive of their experiences. Involving people with LD as informants, contributors and, in effect, co-researchers will facilitate digital inclusion and empowerment. The project is working with approximately 80 adults of all ages who have 'mild' learning disabilities (people who are able to read basic texts and write simple sentences). A variety of methods is being used.
Small groups of participants have engaged in simple discussions or storytelling about some aspect of technology (such as ‘when my phone saved me’ or ‘my digital photos’ etc.). Some individuals have been ‘interviewed’ at a PC, laptop or with a mobile device etc., and asked to demonstrate their usage and interests. Social media users have shown their Facebook pages, Pinterest uploads or other material – giving them an additional focus they have used to discuss their ‘digital’ lives. During these sessions, participants have recorded (or employed the researcher to record) their observations on to the e-archive. Parents, carers and other supporters are also being interviewed to explore their experiences of using mobile technology with the cohort, including any difficulties they have observed their charges having. The archive is supplemented with these observations. The presentation will outline the methods described above, highlighting some of the special considerations required when working inclusively with people with LD. It will describe some of the preliminary findings and demonstrate the e-archive with a commentary on the pages shown.

Keywords: inclusive research, learning disabilities, methods, technology

Procedia PDF Downloads 210
631 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. The early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease classification. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model is related to training and classification using different classifiers. The simulation outcomes using the publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
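The Min-Max normalization step applied to the CNN feature outputs can be sketched as follows; the feature values below are hypothetical, not drawn from the paper's dataset:

```python
# Illustrative sketch of Min-Max normalization: feature values produced by
# the CNN stage are rescaled to [0, 1] before being passed to the
# downstream classifiers. The input values are hypothetical.
def min_max_normalize(features):
    """Rescale a 1D feature vector to the [0, 1] range."""
    lo, hi = min(features), max(features)
    if hi == lo:                      # constant vector: map everything to 0
        return [0.0 for _ in features]
    return [(x - lo) / (hi - lo) for x in features]

raw = [3.2, 7.8, 5.0, 3.2, 9.6]       # hypothetical CNN feature outputs
norm = min_max_normalize(raw)
```

After this step every feature lies in [0, 1], so classifiers that are sensitive to feature scale (e.g. distance-based ones) treat all features comparably.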

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 130
630 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid–liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with standard k-ε turbulence and a discrete phase model, and evaluation of the erosion wear rate with velocity varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on solid particulates, which leads to high erosion at the bottom side of the pipeline.
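The dominance of velocity reported above can be illustrated with a generic power-law erosion relation of the kind used in commercial CFD codes, E ~ C * v**n; the constant C and exponent n here are illustrative assumptions, not fitted values from the simulations:

```python
# Generic power-law erosion sketch (Finnie-type models in codes such as
# FLUENT scale erosion rate with impact velocity roughly as E ~ C * v**n).
# C and n below are illustrative assumptions only.
def erosion_rate(velocity, C=1.0e-9, n=2.6):
    """Relative erosion rate for a given mixture velocity (m/s)."""
    return C * velocity ** n

# Doubling velocity from 2 to 4 m/s multiplies the erosion rate by 2**n,
# roughly a factor of six here, which is why velocity dominates the other
# parameters (concentration, density, particle size) in such studies.
ratio = erosion_rate(4.0) / erosion_rate(2.0)
```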

Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model

Procedia PDF Downloads 394
629 Access to Education and Adopted Identity of the Rohingya Amid Government Restrictions in Bangladesh

Authors: Ishrat Zakia Sultana

Abstract:

The consistent persecution, ethnic cleansing, and genocide against the Rohingya in Burma resulted in four major influxes of the Rohingya people to the neighboring country Bangladesh. After the latest influxes of October 2016 and August 2017, the total number of Rohingya in Bangladesh stands somewhere between 900,000 and over one million, placing Bangladesh well ahead in refugee numbers of Dadaab and Kakuma in Kenya, Bidibidi in Uganda, and Zaatari in Jordan. While Bangladesh received recognition and appreciation for receiving a huge number of the Rohingya, one of the fundamental human rights of the Rohingya – education – has never been fulfilled in Bangladesh. The Ministry of Disaster Management and Relief of the government of Bangladesh has been looking after the Rohingya and managing various programs for them. On its website, the Ministry claims that it provides basic supports/services to the Rohingya, including education. In practice, however, education for the Rohingya includes only provisions for registered Rohingya refugees – a very small fraction of the entire Rohingya population hosted in Bangladesh – and only up to grade 7 within the registered camps at Teknaf and Ukhia in the Cox’s Bazar district of the country. There is no answer to the question, ‘What’s next?’ Although refugees in Canada, Sudan, Turkey and other countries have been allowed to go to mainstream schools, Rohingya refugees in Bangladesh are not legally allowed to do so. Due to the lack of proof of nationality of the Rohingya, the government of Bangladesh imposes restrictions on their access to Bangladeshi schools. However, despite their vulnerability and statelessness, many Rohingyas are desperate to pursue education outside the camps and find their own way not only within Cox’s Bazar but even in the capital city of the country. But they must hide their refugee identity to accomplish this. 
My research aims to explore how they manage to get admission amid government restrictions on their access to education in mainstream institutions in Bangladesh. It will reveal how Rohingya people use adopted identities to get access to education in Bangladesh, and how they apply their own techniques to achieve their goals without having government-approved identity. This research examined the strategies the Rohingya applied to manage documents related to their identity to ensure their admission to Bangladeshi education institutions – schools, colleges, and universities. The research employed a qualitative approach. It used semi-structured individual interviews and Focused Group Discussions (FGDs) with 20 male and female Rohingya refugees who are 18 years old and above and have enrolled in Bangladeshi education institutions with adopted identities. I also interviewed 5 local community members and policy makers to understand their perceptions and roles in this process. The findings of this research will allow policy makers to rethink the outcomes of the restrictions on Rohingya’s education in Bangladesh and the ramifications of the denial of Rohingya’s access to education, and to initiate policy dialogues on how to allow Rohingya refugees to pursue education in Bangladesh in a legal way.

Keywords: Bangladesh, education, refugees, Rohingya

Procedia PDF Downloads 43
628 Numerical Approach for Characterization of Flow Field in Pump Intake Using Two Phase Model: Detached Eddy Simulation

Authors: Rahul Paliwal, Gulshan Maheshwari, Anant S. Jhaveri, Channamallikarjun S. Mathpati

Abstract:

Large pumping facilities are a necessary requirement of cooling water systems for power plants, process and manufacturing facilities, flood control, and water or wastewater treatment plants. With large capacities of a few hundred to 50,000 m³/hr, care must be taken to ensure uniform flow to the pump to limit vibration, flow-induced cavitation, and performance problems due to the formation of air-entrained vortices and swirl flow. Successful prediction of these phenomena requires a numerical method and turbulence model to characterize the dynamics of these flows. In past years, single-phase shear stress transport (SST) and Reynolds-averaged Navier-Stokes models (like k-ε, k-ω and RSM) were used to predict the behavior of the flow. A literature study showed that a two-phase model would be more accurate than a single-phase model. In this paper, a 3D geometry simulated using detached eddy simulation (DES) is used to predict the behavior of the fluid, and the results are compared with experimental results. The effect of different grid structures and boundary conditions is also studied. It is observed that the two-phase flow model can more accurately predict the mean flow and turbulence statistics compared to the steady SST model. This validated model will be used for further analysis of vortex structures in a lab-scale model to generate their frequency plots and intensity at different locations in the set-up. This study will help in minimizing the ill effects of vortices on pump performance.

Keywords: grid structure, pump intake, simulation, vibration, vortex

Procedia PDF Downloads 161
627 Budget Optimization for Maintenance of Bridges in Egypt

Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham

Abstract:

Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating. In many cases, maintenance works are performed due to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network given the limited available budget using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance to structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire is conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from Transportation Authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible, and within an acceptable domain.
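The budget-constrained optimization step can be sketched as a toy GA over a ten-bridge network; all costs, condition gains, and condition values below are hypothetical, not the case-study data:

```python
import random

# Toy sketch of GA-based budget allocation: each bridge has one candidate
# repair with a cost and a condition gain, and the GA searches for the
# subset of repairs that maximizes the average network condition without
# exceeding the budget. All numbers are hypothetical.
random.seed(1)
costs  = [40, 25, 60, 30, 50, 20, 45, 35, 55, 15]   # repair cost per bridge
gains  = [12,  8, 20,  9, 15,  6, 14, 10, 18,  5]   # condition improvement
base   = [60, 70, 50, 65, 55, 75, 58, 68, 52, 80]   # current condition (0-100)
BUDGET = 150

def fitness(plan):
    """Average network condition after the chosen repairs; -1 if over budget."""
    cost = sum(c for c, p in zip(costs, plan) if p)
    if cost > BUDGET:
        return -1.0
    return sum(b + g * p for b, g, p in zip(base, gains, plan)) / len(base)

def evolve(pop_size=40, generations=60):
    # Seed with the do-nothing plan so a feasible solution always exists.
    pop = [[0] * len(costs)]
    pop += [[random.randint(0, 1) for _ in costs] for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(costs))    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                # bit-flip mutation
                i = random.randrange(len(costs))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_plan = evolve()
```

The elitist selection guarantees the best feasible plan found so far is never lost, so the returned plan is always at least as good as doing nothing.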

Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain

Procedia PDF Downloads 273
626 Fixed Point Iteration of a Damped and Unforced Duffing's Equation

Authors: Paschal A. Ochang, Emmanuel C. Oji

Abstract:

The Duffing equation is a second-order system that is very important because it is fundamental to the behaviour of higher-order systems and has applications in almost all fields of science and engineering. In the biological area, it is useful in plant stem dependence and natural frequency and in models of Brain Crash Analysis (BCA). In engineering, it is useful in the study of damping in indoor construction and traffic lights, and to the meteorologist it is useful in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. Therefore we aim to find out whether it is possible to obtain one analytical fixed point for the non-linear ordinary differential equation using a fixed point analytical method. We started by exposing the scope of the Duffing equation and other related works on it. With a major focus on the fixed point and the fixed point iterative scheme, we tried different iterative schemes on the Duffing equation. We were able to identify that one can only see the fixed points of a damped Duffing equation and not of the undamped Duffing equation. This is because the cubic nonlinearity term is the determining factor in the Duffing equation. We finally came to results where we identified the stability of an equation that is damped, forced, and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first- and second-order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different versions of the (damped) Duffing equation we find fixed points; therefore the order of computations and running time of applied software in all fields using the Duffing equation will be reduced.
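The conversion described above, rewriting the damped, unforced Duffing equation x'' + d*x' + a*x + b*x**3 = 0 as a first-order system and locating its fixed points, can be sketched as follows (the coefficient values are illustrative assumptions, not those of the paper):

```python
import math

# Damped, unforced Duffing:  x'' + d*x' + a*x + b*x**3 = 0, rewritten as
# the first-order system  x' = v,  v' = -d*v - a*x - b*x**3.
# Fixed points are where both right-hand sides vanish.
a, b, d = -1.0, 1.0, 0.3      # illustrative alpha, beta, and damping delta

def system(x, v):
    """Right-hand side of the first-order Duffing system."""
    return v, -d * v - a * x - b * x ** 3

# Setting v = 0 and a*x + b*x**3 = 0 gives x = 0 and x = +/- sqrt(-a/b).
fixed_points = [0.0, math.sqrt(-a / b), -math.sqrt(-a / b)]

# With damping present, a trajectory started near a stable fixed point
# settles onto it (simple explicit Euler integration):
x, v, dt = 1.2, 0.0, 0.01
for _ in range(5000):
    dx, dv = system(x, v)
    x, v = x + dt * dx, v + dt * dv
```

The trajectory spirals into the fixed point at x = 1, illustrating why fixed points are only visible for the damped equation: without the damping term, orbits oscillate around the equilibria forever instead of converging to them.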

Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis

Procedia PDF Downloads 266
625 Liquid Biopsy and Screening Biomarkers in Glioma Grading

Authors: Abdullah Abdu Qaseem Shamsan

Abstract:

Background: Gliomas represent the most frequent, heterogeneous group of tumors arising from glial cells, characterized by difficult monitoring, poor prognosis, and fatality. Tissue biopsy is an established procedure for tumor cell sampling that aids diagnosis, tumor grading, and prediction of prognosis. We studied and compared the levels of liquid biopsy markers in patients with different grades of glioma. We also tried to establish a potential association between glioma and specific blood group antigens. Results: 78 patients were identified, among whom the largest percentage with glioblastoma possessed blood group O+ (53.8%). The second highest frequency had blood group A+ (20.4%), followed by B+ (9.0%) and A- (5.1%), with O- the least frequent. Liquid biopsy biomarkers comprised ALT, LDH, lymphocytes, urea, alkaline phosphatase, AST, neutrophils, and CRP. The levels of all the components increased significantly with the severity of glioma, with maximum levels seen in glioblastoma (grade IV), followed by grade III and grade II, respectively. Conclusion: Gliomas pose significant clinical challenges due to their progression, heterogeneous nature, and aggressive behavior. Liquid biopsy is a non-invasive approach that helps establish the status of the patient and determine the tumor grade, and therefore may show diagnostic and prognostic utility. Additionally, our study provides evidence of the role of ABO blood group antigens in the development of glioma. However, future clinical research on liquid biopsy will improve the sensitivity and specificity of these tests and validate their clinical usefulness to guide treatment approaches.

Keywords: GBM: glioblastoma multiforme, CT: computed tomography, MRI: magnetic resonance imaging, ctRNA: circulating tumor RNA

Procedia PDF Downloads 30
624 Computational Fluid Dynamics Modeling of Flow Properties Fluctuations in Slug-Churn Flow through Pipe Elbow

Authors: Nkemjika Chinenye-Kanu, Mamdud Hossain, Ghazi Droubi

Abstract:

Prediction of multiphase flow-induced forces, void fraction, and pressure is crucial at both the design and operating stages of practical energy and process pipe systems. In this study, transient numerical simulations of upward slug-churn flow through a vertical 90-degree elbow have been conducted. The volume of fluid (VOF) method was used to model the two-phase flows, while the k-epsilon Reynolds-Averaged Navier-Stokes (RANS) equations were used to model turbulence in the flows. The simulation results were validated using experimental results. The void fraction signal, peak frequency, and maximum magnitude of void fraction fluctuation of the slug-churn flow validation case studies compared well with experimental results. The x- and y-direction force fluctuation signals at the elbow control volume were obtained by carrying out force balance calculations using the directly extracted time-domain signals of flow properties through the control volume in the numerical simulation. The computed force signal compared well with experiment for the slug and churn flow validation case studies. Hence, the present numerical simulation technique was able to predict the behaviours of the one-way flow-induced forces and void fraction fluctuations.
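The force balance step can be illustrated with a steady, single-phase simplification of the control-volume calculation (the actual study extracts transient two-phase signals); the density, velocity, pressure, and area values below are assumptions:

```python
import math

# Steady control-volume force balance at a 90-degree elbow: each reaction
# force component carries a momentum-flux term plus a pressure-area term.
# All flow property values are illustrative assumptions (water-like flow).
rho, v, p, A = 998.0, 3.0, 2.0e5, 0.005   # density, velocity, pressure, area
m_dot = rho * v * A                        # mass flow rate (kg/s)

# Inlet along +x, outlet along +y for the 90-degree elbow:
Fx = m_dot * v + p * A                     # x-momentum entering the volume
Fy = m_dot * v + p * A                     # y-momentum leaving the volume
F_resultant = math.hypot(Fx, Fy)
```

In the transient two-phase case, rho and v inside the control volume fluctuate with the passing slugs, so Fx and Fy become fluctuating signals rather than constants, which is exactly what the study extracts from the VOF fields.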

Keywords: computational fluid dynamics, flow induced vibration, slug-churn flow, void fraction and force fluctuation

Procedia PDF Downloads 142
623 Contemporary Challenges in Public Relations in the Context of Globalization

Authors: Marine Kobalava, Eter Narimanishvili, Nino Grigolaia

Abstract:

The paper analyzes contemporary problems of public relations in Georgia. Approaches to public attitudes towards relations with the population of the country are studied on a global scale, and the importance of forming a public relations concept in Georgia under globalization is justified. The basic components of public relations are characterized by the RACE system, namely research, action, communication, and evaluation. The main challenges of public relations are identified in the research process; taking into consideration the scope of globalization, the influence of social, economic, and political changes in Georgia on PR development is identified. The article discusses public relations as a strategic management function that facilitates communication with society and the recognition and prediction of public interests. In addition, the feminization of the sector is considered to be one of the most important achievements of public relations in the modern world; the conclusion is that the field's feminization indicator is an unconditional increase in the employment rates of women. The paper studies the problems of globalization and public relations in industrial countries and proposes directions for improving public relations against the background of the peculiarities of different countries and the globalization process. Public relations under globalization are assessed in accordance with the theory of benefits and requirements, and the requirements are classified according to informational, self-identification, integration, social interaction, and other types of signs. In the article, conclusions on the current challenges of public relations in Georgia are made, and recommendations for their solution, taking into consideration globalization processes in the world, are proposed.

Keywords: public relations, globalization, RACE system, public relationship concept, feminization

Procedia PDF Downloads 148
622 A Literature Review on Emotion Recognition Using Wireless Body Area Network

Authors: Christodoulou Christos, Politis Anastasios

Abstract:

The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.

Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, psychological signals, emotion, smart-watch, prediction

Procedia PDF Downloads 31
621 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient; therefore, vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents the hypothesis of a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
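The batch-grinding form of the PBM that underlies such breakage-parameter fitting can be sketched as follows; the selection function S and breakage distribution b below are illustrative assumptions, not fitted values from the study:

```python
# Batch-grinding population balance model (PBM) sketch:
#   dm_i/dt = -S_i * m_i + sum_{j<i} b_{ij} * S_j * m_j
# where m_i is the mass fraction in size class i (coarsest first), S_i the
# selection (breakage rate) function, and b_{ij} the fraction of broken
# class-j material reporting to class i. S and b values are assumptions.
S = [0.8, 0.5, 0.2, 0.0]                        # finest class does not break
b = {(1, 0): 0.5, (2, 0): 0.3, (3, 0): 0.2,     # columns sum to 1, so mass
     (2, 1): 0.6, (3, 1): 0.4,                  # is conserved
     (3, 2): 1.0}

def grind(m, t_end, dt=0.001):
    """Integrate the PBM with explicit Euler steps."""
    m = list(m)
    for _ in range(round(t_end / dt)):
        dm = [0.0] * len(m)
        for i in range(len(m)):
            dm[i] -= S[i] * m[i]
            for j in range(i):
                dm[i] += b[(i, j)] * S[j] * m[j]
        m = [mi + dt * dmi for mi, dmi in zip(m, dm)]
    return m

# Start with all mass in the coarsest class and grind for 5 time units:
product = grind([1.0, 0.0, 0.0, 0.0], t_end=5.0)
```

Fitting S and b to mill test data (here they are simply assumed) is what allows the product size distribution of one mill to be predicted from parameters measured on another, which is the comparison the study proposes between the vertical stirred mill and the Bond ball mill.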

Keywords: bond ball mill, population balance model, product size distribution, vertical stirred mill

Procedia PDF Downloads 264
620 The Importance of Clinicopathological Features for Differentiation Between Crohn's Disease and Ulcerative Colitis

Authors: Ghada E. Esheba, Ghadeer F. Alharthi, Duaa A. Alhejaili, Rawan E. Hudairy, Wafaa A. Altaezi, Raghad M. Alhejaili

Abstract:

Background: Inflammatory bowel disease (IBD) consists of two specific gastrointestinal disorders: ulcerative colitis (UC) and Crohn's disease (CD). Despite their distinct natures, these two diseases share many similar etiologic, clinical, and pathological features; as a result, their accurate differential diagnosis may sometimes be difficult. Correct diagnosis is important because surgical treatment and long-term prognosis differ between UC and CD. Aim: This study aims to identify the characteristic clinicopathological features that help in the differential diagnosis between UC and CD, and to assess disease activity in ulcerative colitis. Materials and methods: This study was carried out on 50 selected cases: 27 cases of UC and 23 cases of CD. All cases were examined using H&E staining and immunohistochemically for bcl-2 expression. Results: Characteristic features of UC included a decrease in mucous content, an irregular or villous surface, crypt distortion, and cryptitis, whereas the cardinal histopathological features seen in CD were epithelioid granuloma, transmural chronic inflammation, and the absence of mucin depletion, irregular surface, or crypt distortion. Three cases of UC were found to be associated with dysplasia. UC mucosa contains fewer Bcl-2+ cells compared with CD mucosa. Conclusion: This study, using multiple parameters such as clinicopathological features and Bcl-2 expression as studied by immunohistochemical staining, helped to achieve an accurate differentiation between UC and CD. Furthermore, this work shed light on the activity and different grades of UC, which could be important for the prediction of relapse.

Keywords: Crohn's disease, dysplasia, inflammatory bowel disease, ulcerative colitis

Procedia PDF Downloads 174
619 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. 
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
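The two complementary 2D views described above can be sketched in miniature as follows; a naive DFT magnitude stands in for the spectrogram, and the synthetic series and sizes are illustrative assumptions, not the paper's implementation:

```python
import math

# Miniature sketch of the two views Times2D builds from a 1D series:
# (i) a frequency-domain view (a naive DFT magnitude standing in for the
# spectrogram), capturing periodicity, and (ii) a time-domain "derivative
# heatmap" of first and second differences, capturing sharp fluctuations
# and turning points. The input series is synthetic.
series = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]  # 4 cycles

def dft_magnitude(x):
    """Magnitude of the first N//2 DFT bins of a real series."""
    N = len(x)
    return [abs(sum(x[n] * complex(math.cos(2 * math.pi * k * n / N),
                                   -math.sin(2 * math.pi * k * n / N))
                    for n in range(N))) for k in range(N // 2)]

def derivative_heatmap(x):
    d1 = [b - a for a, b in zip(x, x[1:])]    # first differences
    d2 = [b - a for a, b in zip(d1, d1[1:])]  # second differences
    return [d1, d2]

spectrum = dft_magnitude(series)
heatmap = derivative_heatmap(series)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
```

The dominant frequency bin recovers the series' periodicity (4 cycles), while the derivative rows localize rapid changes in time; stacking such rows yields the 2D representations that computer vision backbones can then process.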

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 24
618 Prediction of Pounding between Two SDOF Systems by Using Link Element Based On Mathematic Relations and Suggestion of New Equation for Impact Damping Ratio

Authors: Seyed M. Khatami, H. Naderpour, R. Vahdani, R. C. Barros

Abstract:

Many previous studies have been carried out to calculate the impact force and the dissipated energy between two neighboring buildings during seismic excitation, when they collide with each other. Numerical studies are an important part of impact analysis, and several researchers have tried to simulate impact using different formulas. Estimation of the impact force and the dissipated energy depends significantly on several parameters of impact. The mass of the bodies, the stiffness of the spring, the coefficient of restitution, the damping ratio of the dashpot, and the impact velocity are some of the known and unknown parameters used to simulate impact and measure the energy dissipated during collision. Collision is usually shown by a force-displacement hysteresis curve, and the enclosed area of the hysteresis loop represents the energy dissipated during impact. In this paper, the effect of using different types of impact models is investigated in order to calculate the impact force. To increase the accuracy of the impact model and optimize the results of the simulations, a new damping equation is assumed and validated to get the best results for impact force and dissipated energy, which can show the accuracy of the suggested equation of motion in comparison with other formulas. This relation is called "n-m". Based on a mathematical relation, an initial value is selected for the mentioned coefficients and the kinetic energy loss is calculated. After each simulation, the kinetic energy loss and the energy dissipation are compared with each other. If they are equal, the selected parameters are true; if not, the parameter constants are modified and a new analysis is performed. Finally, two unknown parameters are suggested to estimate the impact force and calculate the dissipated energy.
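The abstract does not give the "n-m" relation itself; as a generic illustration of the kinetic-energy-loss versus dissipated-energy comparison it describes, a minimal linear spring-dashpot (Kelvin-Voigt type) impact sketch is shown below, with all parameter values assumed:

```python
# Minimal spring-dashpot impact sketch: during penetration the contact
# force is F = k*delta + c*delta_dot, and the kinetic energy lost by the
# impacting mass should match the energy dissipated by the dashpot (the
# enclosed hysteresis-loop area). All parameter values are illustrative.
m, k, c = 1.0, 1.0e4, 20.0        # mass, contact stiffness, dashpot damping
v0, dt = 1.0, 1.0e-6              # approach velocity, time step

delta, v = 0.0, v0                # penetration depth and its rate
dissipated = 0.0
while True:
    F = k * delta + c * v         # spring-dashpot contact force
    v -= (F / m) * dt             # decelerate the impacting mass
    delta += v * dt               # semi-implicit Euler update
    dissipated += c * v * v * dt  # power absorbed by the dashpot
    if delta <= 0.0:              # bodies separate: impact is over
        break

kinetic_loss = 0.5 * m * (v0 ** 2 - v ** 2)
```

At separation the spring stores essentially no energy, so kinetic_loss and dissipated agree closely; the iterative check the paper describes compares exactly these two quantities and adjusts the damping parameters until they balance.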

Keywords: impact force, dissipated energy, kinetic energy loss, damping relation

Procedia PDF Downloads 539