Search results for: impact models
14815 Improving the Quantification Model of Internal Control Impact on Banking Risks
Authors: M. Ndaw, G. Mendy, S. Ouya
Abstract:
Risk management in the banking sector is a key issue linked to financial system stability, and its importance has been heightened by technological developments and the emergence of new financial instruments. In this paper, we improve the model previously defined for quantifying the impact of internal control on banking risks by automating the residual criticality estimation step of FMECA. To this end, we define three equations and a maturity coefficient to obtain a mathematical model that is tested on all banking processes and types of risks. The new model allows an optimal assessment of residual criticality and raises the correlation rate to 98%.
Keywords: risk, control, banking, FMECA, criticality
Procedia PDF Downloads 334
14814 Domestic Remittances, Household Enterprises, and Household Well-being in Ghana
Authors: Abdul-Majeed Imoro
Abstract:
This paper investigates the interactive effect of domestic remittances and household enterprises on household well-being in Ghana. The study employs data drawn from the seventh wave of the Ghana Living Standards Survey (GLSS 7), comprising 14,009 households located in 1,000 enumeration areas for the 2016/2017 period. The Ordinary Least Squares (OLS) regression technique is used to estimate the interactive effect of domestic remittances and household enterprises on household well-being, while the Linear Probability Model (LPM) is used to estimate the impact of domestic remittances on household enterprises. A Two-Stage Least Squares (2SLS) model is employed to address endogeneity between the dependent and explanatory variables. The study reveals the following findings: domestic remittances improve household well-being significantly. There is also a significant negative impact of domestic remittances on household enterprises, implying that households that receive domestic remittances are less likely to engage in household enterprises. Finally, the 2SLS results show a significant and positive impact of the interaction between domestic remittances and household enterprises on household well-being. This study provides empirical evidence of why policymakers need to encourage households that receive domestic remittances to diversify their income sources and invest in other income-generating activities such as household enterprises.
Keywords: domestic remittances, household enterprises, household well-being, Ghana
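As an illustration of the two-stage least squares logic described above, the following sketch runs a manual 2SLS on synthetic data; the variable names, the instrument, and all coefficients are assumptions for illustration only and do not come from the GLSS 7 analysis.

```python
# Illustrative 2SLS sketch on synthetic data (variables and instrument are invented).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
instrument = rng.normal(size=n)                 # assumed instrument for remittances
u = rng.normal(size=n)                          # unobserved factor creating endogeneity
remittances = 0.8 * instrument + 0.5 * u + rng.normal(size=n)
wellbeing = 1.0 + 0.6 * remittances + 0.9 * u + rng.normal(size=n)

# Stage 1: regress the endogenous regressor on the instrument, keep fitted values
stage1 = sm.OLS(remittances, sm.add_constant(instrument)).fit()
remit_hat = stage1.fittedvalues

# Stage 2: regress the outcome on the stage-1 fitted values
stage2 = sm.OLS(wellbeing, sm.add_constant(remit_hat)).fit()
print("naive OLS estimate:", sm.OLS(wellbeing, sm.add_constant(remittances)).fit().params[1])
print("2SLS estimate     :", stage2.params[1])  # closer to the true coefficient 0.6
```

In practice a dedicated IV estimator would also be used to obtain correct second-stage standard errors, which a manual two-stage fit does not provide.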
Procedia PDF Downloads 22
14813 Determination of Fatigue Limit in Post Impacted Carbon Fiber Reinforced Epoxy Polymer (CFRP) Specimens Using Self Heating Methodology
Authors: Deepika Sudevan, Patrick Rozycki, Laurent Gornet
Abstract:
This paper presents the experimental identification of the fatigue limit for pristine and impacted Carbon Fiber Reinforced Epoxy polymer (CFRP) woven composites based on the relatively new self-heating methodology for composites. CFRP composites of [0/90]8 and quasi isotropic configurations prepared using hand-layup technique are subjected to low energy impacts (20 J energy) simulating a barely visible impact damage (BVID). Runway debris strike, tool drop or hailstone impact can cause a BVID on an aircraft fuselage made of carbon composites and hence understanding the post-impact fatigue response of CFRP laminates is of immense importance to the aerospace community. The BVID zone on the specimens is characterized using X-ray Tomography technique. Both pristine and impacted specimens are subjected to several blocks of constant amplitude (CA) fatigue loading keeping R-ratio a constant but with increments in the mean loading stress after each block. The number of loading cycles in each block is a subjective parameter and it varies for pristine and impacted CFRP specimens. To monitor the temperature evolution during fatigue loading, thermocouples are pasted on the CFRP specimens at specific locations. The fatigue limit is determined by two strategies, first is by considering the stabilized temperature in every block and second is by considering the change in the temperature slope per block. The results show that both strategies can be adopted to determine the fatigue limit in both pristine and impacted CFRP composites.Keywords: CFRP, fatigue limit, low energy impact, self-heating, WRM
Procedia PDF Downloads 232
14812 Effect of Large English Studies Classes on Linguistic Achievement and Classroom Discourse at Junior Secondary Level in Yobe State
Authors: Clifford Irikefe Gbeyonron
Abstract:
Applied linguists concur that achievement in English language use among Nigerian secondary school students is low. One of the factors that exacerbates this is classroom conditions, of which large class size is the most obvious. This study investigated the impact of large classes on learning English as a second language (ESL) at junior secondary school (JSS) level in Yobe State. A Solomon four-group experimental design was used: 382 subjects were divided into four groups and taught ESL for thirteen weeks, and 356 subjects wrote the post-test. Data from the systematic observation and the post-test were analyzed via chi-square and ANOVA. Results indicated that learners in large classes (LLC) make less linguistic progress than learners in small classes (LSC). Furthermore, LSC have more opportunities to receive teacher evaluation and to participate actively in classroom discourse than LLC. Consequently, large classes have adverse effects on learning ESL in Yobe State, which is detrimental to English language education given that each ESL learner has individual peculiarities within each class. It is recommended that strategies prioritizing individualization, grouping, the use of language teaching aids, and the development of innovative models for large classes be considered.
Keywords: large classes, achievement, classroom discourse
Procedia PDF Downloads 409
14811 A Geographic Overview of Offshore Energy Cleantech in Portugal
Authors: Ana Pego
Abstract:
Environmental technologies have been developed for decades, while clean technologies emerged only a few years ago. The use of cleantech has become very important in this new era of environmental challenges, and the market has become more competitive and more collaborative in pursuit of a better use of clean technologies. This paper shows the importance of clean technologies in the offshore energy sector in the Portuguese market, their localization, and their impact on the economy. Clean technologies are directly related to the renewables cluster and to economic and social resource-optimization criteria, geographic aspects, climate change, and soil features. Cleantech is linked to regional development and socio-technical transitions in organisations. Economic and social combinations allow the specialisation of regions in particular activities, higher employment, reduced energy costs, local knowledge spillovers, and business collaboration and competitiveness. The methodology is both quantitative (input-output matrix for Portugal, 2013) and qualitative (questionnaires to stakeholders). The mix of both methodologies will confirm whether the use of these technologies has a positive impact on the economic and social variables used in the model. A positive impact on the Portuguese economy is expected, both in investment and in employment, taking into account the localization of offshore renewable activities. The importance of offshore renewable investment in Portugal thus rests on a few points: the increase in specialised employment, the localization of specific activities in the territory, and the increase in value added in certain regions. The conclusions will allow researchers and organisations to compare the Portuguese model with other European regions in order to make better use of natural and human resources.
Keywords: cleantech, economic impact, localisation, territory dynamics
Procedia PDF Downloads 228
14810 Seismic Hazard Prediction Using Seismic Bumps: Artificial Neural Network Technique
Authors: Belkacem Selma, Boumediene Selma, Tourkia Guerzou, Abbes Labdelli
Abstract:
Natural disasters have occurred and will continue to occur, causing human and material damage; preventing them outright is not possible, but their prediction becomes increasingly feasible with the advancement of technology. Even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment necessary for effective disaster reduction. Predicting earthquakes in order to prevent the loss of human lives and property damage is therefore crucial, which is why techniques for predicting this natural disaster need to be developed. The present study analyzes the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data describe the problem of forecasting high-energy (greater than 10^4 J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines were analyzed. The results show that the ANN was able to predict earthquake parameters with high accuracy: the classification accuracy of the neural networks exceeds 94%, and the models developed are efficient, robust, and only weakly dependent on the initial database.
Keywords: earthquake prediction, ANN, seismic bumps
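A minimal sketch of the kind of ANN classification workflow the abstract describes, assuming a generic tabular seismic-bump feature set; the feature values, labels, and network size below are placeholders, not the authors' configuration.

```python
# Illustrative ANN classifier for high-energy seismic bumps (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder records: e.g. seismic/acoustic energy, pulse counts, shift descriptors.
X = rng.normal(size=(1000, 8))                                   # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1).astype(int)  # 1 = bump

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("classification accuracy:", accuracy_score(y_te, model.predict(X_te)))
```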
Procedia PDF Downloads 127
14809 Efficient Energy Extraction Circuit for Impact Harvesting from High Impedance Sources
Authors: Sherif Keddis, Mohamed Azzam, Norbert Schwesinger
Abstract:
Harvesting mechanical energy from footsteps or other impacts is one way to enable wireless autonomous sensor nodes. These can be used for highly efficient control of connected devices such as lights, security systems, air conditioning systems, or other smart home applications, as well as for accurate location or occupancy monitoring. Converting the mechanical energy into useful electrical energy can be achieved using the piezoelectric effect, which offers simple harvesting setups and low deflections. The challenge facing piezoelectric transducers is the low achievable energy per impact, in the lower mJ range, and the management of such small energies. Simple extraction setups, such as a full-wave bridge connected directly to a capacitor, are problematic due to the mismatch between high-impedance sources and low-impedance storage elements. Efficient energy circuits for piezoelectric harvesters are commonly designed for vibration harvesting and require periodic input energy with predictable frequencies; given the sporadic nature of impact harvesters, such circuits are not well suited. This paper presents a self-powered circuit that avoids the impedance mismatch during energy extraction by disconnecting the load until the source reaches its charge peak. The switch is implemented with passive components and works independently of the input frequency, making the circuit suitable for impact harvesting and sporadic inputs. For the same input energy, this circuit stores 150% of the energy stored by a capacitor connected directly to a bridge rectifier. The total efficiency, defined as the ratio of the energy stored on a capacitor to the available energy measured across a matched resistive load, is 63%. Although the resulting energy is already sufficient to power certain autonomous applications, further optimization of the circuit is under investigation to improve the overall efficiency.
Keywords: autonomous sensors, circuit design, energy harvesting, energy management, impact harvester, piezoelectricity
Procedia PDF Downloads 154
14808 Vibration Transmission across Junctions of Walls and Floors in an Apartment Building: An Experimental Investigation
Authors: Hugo Sampaio Libero, Max de Castro Magalhaes
Abstract:
The perception of sound radiated from a building floor is greatly influenced by the rooms in which it is immersed and by the positions of both listener and source. The main question that remains unanswered is related to the influence of the source position on the sound power radiated by a complex wall-floor system in buildings. This research is concerned with the investigation of vibration transmission across walls and floors in buildings. It is primarily based on the determination of the vibration reduction index via experimental tests. Knowledge of this parameter may help in predicting noise and vibration propagation in building components. First, the physical mechanisms involved in vibration transmission across structural junctions are described, and an experimental setup is built to aid this investigation. The experimental tests have shown that the vibration generated in the walls and floors is directly related to their size and boundary conditions. It is also shown that the vibration source position can affect the overall vibration spectrum significantly. Second, the characteristics of the noise spectra inside the rooms due to an impact source (tapping machine) are presented. Conclusions are drawn for the general trends of the vibration and noise spectra of the structural components and rooms, respectively. In summary, the aim of this paper is to investigate the vibro-acoustic behavior of building floors and walls under floor impact excitation, with the impact excitation applied at distinct positions on the slab. The analysis highlights the main physical characteristics of the vibration transmission mechanism.
Keywords: vibration transmission, vibration reduction index, impact excitation, experimental tests
Procedia PDF Downloads 93
14807 Large Language Model Powered Chatbots Need End-to-End Benchmarks
Authors: Debarag Banerjee, Pooja Singh, Arjun Avadhanam, Saksham Srivastava
Abstract:
Autonomous conversational agents, i.e., chatbots, are becoming an increasingly common mechanism for enterprises to provide support to customers and partners. In order to rate chatbots, especially ones powered by generative AI tools like Large Language Models (LLMs), we need to be able to assess their performance accurately. This is where chatbot benchmarking becomes important. In this paper, the authors propose a benchmark they call the E2E (End-to-End) benchmark and show how it can be used to evaluate the accuracy and usefulness of the answers provided by chatbots, especially ones powered by LLMs. The authors evaluate an example chatbot at different levels of sophistication based on both the E2E benchmark and other available metrics commonly used in the state of the art, and observe that the proposed benchmark shows better results compared to the others. In addition, while some metrics proved to be unpredictable, the metric associated with the E2E benchmark, which uses cosine similarity, performed well in evaluating chatbots. The performance of the best models shows that there are several benefits to using the cosine similarity score as a metric in the E2E benchmark.
Keywords: chatbot benchmarking, end-to-end (E2E) benchmarking, large language model, user-centric evaluation
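A small sketch of the cosine-similarity scoring idea behind the E2E metric; the embedding dimension and the vectors are stand-ins, since the abstract does not specify the paper's actual embedding model or pipeline.

```python
# Illustrative cosine-similarity answer score between a chatbot answer and a reference answer.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice the vectors would come from a sentence-embedding model applied to the chatbot
# answer and a "golden" reference answer; random vectors stand in here for illustration.
rng = np.random.default_rng(1)
chatbot_answer_vec = rng.normal(size=384)
golden_answer_vec = chatbot_answer_vec + rng.normal(scale=0.3, size=384)  # near-correct answer
print("E2E-style answer score:", cosine_similarity(chatbot_answer_vec, golden_answer_vec))
```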
Procedia PDF Downloads 67
14806 The Effectiveness of Multiphase Flow in Well-Control Operations
Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia
Abstract:
Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of fluids as well as their non-Newtonian properties, enabling realistic kick treatment, together with a corresponding numerical solution method built on an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, the volumetric fraction, and the velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was on the near-drill-bit section; this inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas exists in the annulus in the form of free gas.
Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics
Procedia PDF Downloads 119
14805 Machine Learning Automatic Detection on Twitter Cyberbullying
Authors: Raghad A. Altowairgi
Abstract:
With the widespread use of social media platforms, young people tend to use them extensively as their first means of communication because of their ease and modernity. These platforms, however, often create fertile ground for bullies to practice aggressive behavior against their victims. Platform usage cannot be reduced, but intelligent mechanisms can be implemented to reduce the abuse, and this is where machine learning comes in: understanding and classifying text can help minimize acts of cyberbullying. Artificial intelligence techniques have been applied to address the phenomenon of cyberbullying. In this research, machine learning models are built to classify text into two classes: cyberbullying and non-cyberbullying. The data are preprocessed in four stages: removing characters that do not provide meaningful information to the models, tokenization, removing stop words, and lowercasing the text. BoW and TF-IDF are used as the main features for five classifiers: logistic regression, Naïve Bayes, Random Forest, XGBoost, and CatBoost. They score 92%, 90%, 92%, 91%, and 86%, respectively.
Keywords: cyberbullying, machine learning, Bag-of-Words, term frequency-inverse document frequency, natural language processing, CatBoost
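A minimal sketch of one of the described feature/classifier combinations (TF-IDF with logistic regression); the tiny inline corpus is invented purely for illustration, and the preprocessing is simplified relative to the paper's four stages.

```python
# Illustrative two-class text classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are so stupid and ugly", "nobody likes you, loser",
         "great game last night", "see you at the library tomorrow"]
labels = [1, 1, 0, 0]  # 1 = cyberbullying, 0 = non-cyberbullying

# lowercasing and stop-word removal are handled here by the vectorizer itself
clf = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["you are such a loser"]))   # likely [1] on this toy corpus
```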
Procedia PDF Downloads 130
14804 Corporate Governance of Intellectual Capital: The Impact of Intellectual Capital Reporting
Authors: Cesar Julio Recalde
Abstract:
Background: The role of intangible assets in today's society is undeniable and continuously growing; more than 80% of corporate market value is related to intellectual capital (IC). However, corporate governance principles and practices seem strongly based on and oriented towards tangible assets, so the impact of intangible assets on corporate governance might require preventive and adaptive actions. Adherence to voluntary mechanisms of intellectual capital reporting (ICR) seems to be a gateway towards adapting corporate governance to the influence of intangible assets, and a conceptual cornerstone; the impact of such adherence on corporate governance and performance needs to be evaluated. Purposes: This work has a sequential, twofold purpose: (1) exploring the influences exerted by IC on corporate governance theory and practice, and, within that context, (2) analyzing the impact of adherence to voluntary mechanisms of ICR on corporate governance. Design and summary: The work employs the theory of the firm and agency theory to conceptually explore the effects of each dimension of IC on key corporate governance issues, namely property rights and control by shareholders and residual claims by stakeholders, fiduciary duties of management and the board, opportunistic behavior, and transparency. A comprehensive IC taxonomy and map is presented. Within the resulting context, the internal and external impacts of ICR on corporate governance and performance are conceptually analyzed, ICR constraints and barriers are identified, intellectual liabilities are presented within the context of ICR, and the ICR regulatory framework is surveyed. Findings: Relevant conclusions were drawn on the influence of intellectual capital on corporate governance. Sufficient evidence of a positive impact of ICR on corporate governance and performance was found. Additionally, ICR was found to exert a leveraging effect on IC itself. Intellectual liabilities are insufficiently researched and appear to be of relevant importance for IC measurement. The ICR regulatory framework was found to be insufficiently developed to capture the essence of intangible assets and to meet the corporate governance challenges facing IC. Originality: This work develops a progressive approach to conceptually analyzing the mutual influences between IC and corporate governance. An epistemic ideogram represents the intersection of the analyzed theories, and an IC map is presented. The relatively new topic of intellectual liabilities is conceptually analyzed in the context of ICR, and social liabilities and client liabilities are presented.
Keywords: corporate governance, intellectual capital, intellectual capital reporting, intellectual assets, intellectual liabilities, voluntary mechanisms, regulatory framework
Procedia PDF Downloads 386
14803 A Meta-Analysis of School-Based Suicide Prevention for Adolescents and Meta-Regressions of Contextual and Intervention Factors
Authors: E. H. Walsh, J. McMahon, M. P. Herring
Abstract:
Post-primary school-based suicide prevention (PSSP) is a valuable avenue to reduce suicidal behaviours in adolescents. The aims of this meta-analysis and meta-regression were 1) to quantify the effect of PSSP interventions on adolescent suicide ideation (SI) and suicide attempts (SA), and 2) to explore how intervention effects may vary based on important contextual and intervention factors. This study provides further support to the benefits of PSSP by demonstrating lower suicide outcomes in over 30,000 adolescents following PSSP and mental health interventions and tentatively suggests that intervention effectiveness may potentially vary based on intervention factors. The protocol for this study is registered on PROSPERO (ID=CRD42020168883). Population, intervention, comparison, outcomes, and study design (PICOs) defined eligible studies as cluster randomised studies (n=12) containing PSSP and measuring suicide outcomes. Aggregate electronic database EBSCO host, Web of Science, and Cochrane Central Register of Controlled Trials databases were searched. Cochrane bias tools for cluster randomised studies demonstrated that half of the studies were rated as low risk of bias. The Egger’s Regression Test adapted for multi-level modelling indicated that publication bias was not an issue (all ps > .05). Crude and corresponding adjusted pooled log odds ratios (OR) were computed using the Metafor package in R, yielding 12 SA and 19 SI effects. Multi-level random-effects models accounting for dependencies of effects from the same study revealed that in crude models, compared to controls, interventions were significantly associated with 13% (OR=0.87, 95% confidence interval (CI), [0.78,0.96], Q18 =15.41, p=0.63) and 34% (OR=0.66, 95%CI [0.47,0.91], Q10=16.31, p=0.13) lower odds of SI and SA, respectively. Adjusted models showed similar odds reductions of 15% (OR=0.85, 95%CI[0.75,0.95], Q18=10.04, p=0.93) and 28% (OR=0.72, 95%CI[0.59,0.87], Q10=10.46, p=0.49) for SI and SA, respectively. Within-cluster heterogeneity ranged from no heterogeneity to low heterogeneity for SA across crude and adjusted models (0-9%). No heterogeneity was identified for SI across crude and adjusted models (0%). Pre-specified univariate moderator analyses were not significant for SA (all ps < 0.05). Variations in average pooled SA odds reductions across categories of various intervention characteristics were observed (all ps < 0.05), which preliminarily suggests that the effectiveness of interventions may potentially vary across intervention factors. These findings have practical implications for researchers, clinicians, educators, and decision-makers. Further investigation of important logical, theoretical, and empirical moderators on PSSP intervention effectiveness is recommended to establish how and when PSSP interventions best reduce adolescent suicidal behaviour.Keywords: adolescents, contextual factors, post-primary school-based suicide prevention, suicide ideation, suicide attempts
Procedia PDF Downloads 104
14802 Knowledge Capital and Manufacturing Firms’ Innovation Management: Exploring the Impact of Transboundary Investment and Assimilative Capacity
Authors: Suleman Bawa, Ayiku Emmanuel Lartey
Abstract:
Purpose - This paper aims to examine the association between knowledge capital and multinational firms’ innovation management. We again explored the impact of transboundary investment and assimilative capacity between knowledge capital and multinational firms’ innovation management. The vital position of knowledge capital and multinational firms’ innovation management in today’s increasingly volatile environment coupled with fierce competition has been extensively acknowledged by academics and industry investment capitals. Design/methodology/approach - The theoretical association model and an empirical correlation analysis were constructed based on relevant research using data collected from 19 multinational firms in Ghana as the subject, and path analysis was constructed using SPSS 22.0 and AMOS 24.0 to test the formulated hypotheses. Findings - Varied conclusions are drawn consequential from theoretical inferences and empirical tests. For multinational firms, knowledge capital relics positively significant to multinational firms’ innovation management. Multinational firms with advanced knowledge capital likely spawn greater corporations’ innovation management. Second, transboundary investment efficiently intermediates the association between knowledge physical capital, knowledge interactive capital, and corporations’ innovation management. At the same time, this impact is insignificant between knowledge of empirical capital and corporations’ innovation management. Lastly, the impact of transboundary investment and assimilative capacity on the association between knowledge capital and corporations’ innovation management is established. We summarized the implications for managers based on our outcomes. Research limitations/implications - Multinational firms must dynamically build knowledge capital to augment corporations’ innovation management. Conversely, knowledge capital motivates multinational firms to implement transboundary investment and cultivate assimilative capacity. Accordingly, multinational firms can efficiently exploit diverse information to augment their corporate innovation management. Practical implications – This paper presents a comprehensive justification of knowledge capital and manufacturing firms’ innovation management by exploring the impact of transboundary investment and assimilative capacity within the manufacturing industry, its sequential progress, and its associated challenges. Originality/value – This paper is amongst the first to find empirical results to back knowledge capital and manufacturing firms’ innovation management by exploring the impact of transboundary investment and assimilative capacity within the manufacturing industry. Additionally, aligning knowledge as a coordinative instrument is a significant input to our discernment in this area.Keywords: knowledge capital, transboundary investment, innovation management, assimilative capacity
Procedia PDF Downloads 78
14801 Line Heating Forming: Methodology and Application Using Kriging and Fifth Order Spline Formulations
Authors: Henri Champliaud, Zhengkun Feng, Ngan Van Lê, Javad Gholipour
Abstract:
In this article, a method is presented to effectively estimate the deformed shape of a thick plate due to line heating. The method uses fifth-order spline interpolation, with up to C3 continuity at specific points, to compute the shape of the deformed geometry. The first- and second-order derivatives over the surface are the resulting parameters of a given heating line on a plate; these parameters are determined through experiments and/or finite element simulations. Very accurate kriging models are fitted to real or virtual surfaces to build up a database of maps. Maps of first- and second-order derivatives are then applied to numerical plate models to evaluate their evolving shapes through a sequence of heating lines. Adding an optimization process to this approach would allow the trajectories of the heating lines needed to shape complex geometries, such as Francis turbine blades, to be determined.
Keywords: deformation, kriging, fifth order spline interpolation, first, second and third order derivatives, C3 continuity, line heating, plate forming, thermal forming
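A short sketch of fitting a fifth-order spline to a deflection profile and reading off the first- and second-order derivatives that the method relies on; the sample data and the SciPy-based implementation are illustrative assumptions, not the authors' code.

```python
# Illustrative quintic (degree-5) spline fit to a synthetic deflection profile.
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0.0, 1.0, 12)                  # positions across a heating line (assumed)
y = 0.05 * np.sin(2 * np.pi * x) + 0.01 * x    # synthetic deflection values
spline = make_interp_spline(x, y, k=5)         # fifth-order spline interpolant

xx = np.linspace(0.0, 1.0, 200)
deflection = spline(xx)
slope = spline.derivative(1)(xx)               # first-order derivative map
curvature = spline.derivative(2)(xx)           # second-order derivative map
print(deflection[:3], slope[:3], curvature[:3])
```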
Procedia PDF Downloads 456
14800 Hydraulic Analysis of Irrigation Approach Channel Using HEC-RAS Model
Authors: Muluegziabher Semagne Mekonnen
Abstract:
This study was intended to estimate irrigation water requirements and to evaluate canal hydraulics under steady-state conditions in order to improve the scheme performance of the Meki-Ziway irrigation project. The CROPWAT 8.0 model was used to estimate the irrigation water requirements of the five major crops irrigated in the study area. The results showed that for the existing and potential irrigation development areas of 2,000 ha and 2,599 ha, crop water requirements were 3,339,200 and 4,339,090.4 m³, respectively. Hydraulic simulation models are fundamental tools for understanding the flow characteristics of irrigation systems, and in this study a hydraulic analysis of the irrigation canals was conducted in the Meki-Ziway irrigation scheme using the HEC-RAS model. The HEC-RAS model was tested in terms of error estimation and used to determine the potential canal capacity.
Keywords: HEC-RAS, irrigation, hydraulic, canal reach, capacity
Procedia PDF Downloads 60
14799 Modeling the Impact of Controls on Information System Risks
Authors: M. Ndaw, G. Mendy, S. Ouya
Abstract:
Information system risk management helps to reduce or eliminate risk by implementing appropriate controls. In this paper, we propose a model for quantifying the impact of controls on information system risks by automating the residual criticality estimation step of FMECA, which is based on inductive reasoning. To this end, we defined three equations based on the type and maturity of controls. For testing, the values obtained with the model were compared to the values estimated by interlocutors during different working sessions, and the result is satisfactory. This model allows an optimal assessment of control maturity and facilitates the risk analysis of an information system.
Keywords: information system, risk, control, FMECA method
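The abstract does not reproduce the three equations, so the sketch below only illustrates the general idea of scaling an FMECA criticality by a control-maturity coefficient; the formula and the weights are assumptions for illustration, not the authors' model.

```python
# Hypothetical residual-criticality calculation (not the paper's actual equations).
def residual_criticality(initial_criticality: float,
                         control_maturity: float,
                         control_weight: float = 1.0) -> float:
    """Assumed reduction of an FMECA criticality by a control of given maturity in [0, 1]."""
    assert 0.0 <= control_maturity <= 1.0
    return initial_criticality * (1.0 - control_weight * control_maturity)

# Example: initial criticality 48 (S x O x D style score), control maturity 0.75, weight 0.8
print(residual_criticality(48, 0.75, control_weight=0.8))  # -> 19.2
```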
Procedia PDF Downloads 355
14798 Development of Coir Reinforced Composite for Automotive Parts Application
Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth
Abstract:
The demand for lightweight and fuel-efficient automobiles has led to the use of fiber-reinforced polymer composites in place of traditional metal parts. Coir, a natural fiber, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices has been a challenge. To address poor interfacial adhesion with polymeric matrices due to their moisture content and method of preparation, the extracted coir was chemically treated using NaOH. To develop a side view mirror encasement by investigating the mechanical effect of fiber percentage composition, fiber length and percentage composition of Epoxy in a coir fiber reinforced composite, polyester was adopted as the resin for the mold, while that of the product is Epoxy. Coir served as the filler material for the product. Specimens with varied compositions of fiber loading (15, 30 and 45) %, length (10, 15, 20, 30 and 45) mm, and (55, 70, 85) % weight of epoxy resin were fabricated using hand lay-up technique, while those specimens were later subjected to mechanical tests (Tensile, Flexural and Impact test). The results of the mechanical test showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and 45mm coir length, which was used for the development of a vehicle’s side view mirror encasement. The optimal solutions for the response parameters are 49.333 Mpa for tensile strength, flexural for 57.118 Mpa, impact strength for 34.787 KJ/M2, young modulus for 4.788 GPa, stress for 4.534 KN, and 20.483 mm for strain. The models that were developed using Design Expert software revealed that the input factors can achieve the response parameters in the system with 94% desirability. The study showed that coir is quite durable for filler material in an epoxy composite for automobile applications and that fiber loading and length have a significant effect on the mechanical behavior of coir fiber-reinforced epoxy composites. The coir's low density, considerable tensile strength, and bio-degradability contribute to its eco-friendliness and potential for reducing the environmental hazards of synthetic automotive components.Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test
Procedia PDF Downloads 64
14797 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in tissue engineering demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging results for the proof of concept. Decellularized Organs preserve organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models results greater proximity of cell culture morphological characteristics in a model to its in vivo counterpart, allows more accurate simulation of the processes within a functioning tumor and its pathogenesis. 3D models allow study of migration processes and cell proliferation with higher reliability as well. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable to accurately predict the responses of living organisms. 3D models can provide several levels of complexity from simple monocultures of cancer cell lines in liquid environment comprised of oxygen and nutrient gradients and cell-cell interaction to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be utilized to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step to create a new ex vivo tissue engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animals’ kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker with the use of various detergents (SDS, Triton X-100) in different concentrations and freezing. Acellular scaffolds and tissue engineered constructs have been characterized and compared using morphological methods. 
Models using decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and a biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; and suitability for culturing patient-derived cells in vitro to evaluate different anticancer therapeutics for the development of personalized medicines.
Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
Procedia PDF Downloads 301
14796 A New Approach to Interval Matrices and Applications
Authors: Obaid Algahtani
Abstract:
An interval may be defined as a convex combination as follows: I = [a,b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may adopt interval operations by applying the scalar operation pointwise to the corresponding interval points: I∙J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I*(J*K) = (I*J)*K. These two properties, which are missing in the usual interval operations, enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as "interval extension", "inclusion function", and determinants, which we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and to investigate the corresponding statistical properties; we feel that they are the natural ones for handling interval systems. They enable the extension of many results from usual state space models to interval state space models. The interval state space model considered here is of the form X_{t+1} = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^{k×k} and H ∈ IR^{p×k} are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This view is reinforced by the numerical results obtained in simulation examples.
Keywords: interval analysis, interval matrices, state space model, Kalman Filter
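A small sketch of the parametric (same-α) interval arithmetic defined above, written for illustration only; intervals are kept as parametric families so that the pointwise pairing by α, and hence associativity, is preserved, and they are collapsed to [min, max] only for display.

```python
# Illustrative parametric interval arithmetic: x_alpha = (1-alpha)*a + alpha*b.
import numpy as np

ALPHA = np.linspace(0.0, 1.0, 1001)          # discretised parameter grid

def interval(a, b):
    """Parametric representation of [a, b]: the family of points x_alpha."""
    return (1.0 - ALPHA) * a + ALPHA * b

def hull(x):
    """Collapse a parametric family to its ordinary interval [min, max] for display."""
    return float(x.min()), float(x.max())

I, J, K = interval(1, 2), interval(-1, 3), interval(0, 1)

# operations act pointwise at the same alpha, as in I*J = {x_alpha * y_alpha}
print(hull(I + J))                            # coincides with the usual interval sum [0, 5]
print(hull((I * J) * K), hull(I * (J * K)))   # identical: associativity holds under the alpha pairing
```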
Procedia PDF Downloads 425
14795 Intellectual Property and SMEs in the Baltic Sea Region: A Comparative Study on the Use of the Utility Model Protection
Authors: Christina Wainikka, Besrat Tesfaye
Abstract:
Several of the countries in the Baltic Sea region are ranked high in international innovation rankings, such as the Global Innovation Index and the European Innovation Scoreboard. There are, however, some concerns about the performance of individual countries. For example, there is a widely held notion of a "Swedish paradox": Sweden is ranked high due to investments in R&D and patent activity, but the outcome is not as high as could be expected. SMEs in Sweden are also below the EU average when it comes to registering intellectual property rights such as patents and trademarks. This study concentrates on utility model protection. This intellectual property right does not exist in Sweden, but it does in, for example, Finland and Germany. Utility model protection is sometimes referred to as a "patent light", since it is easier to obtain than patent protection while still covering technical solutions. Statistics on patenting and utility model registrations show that utility model protection is scarcely used in the countries that offer it: in Germany, 10,577 utility model applications were filed in 2021, and in Finland 259, compared with 58,568 patent applications in Germany and 1,662 in Finland in the same year. Sweden has never had protection for utility models; the only protection for technical solutions is patents and business secrets. The threshold for obtaining a patent is high, due to the legal requirements and the costs, so patent protection is therefore often not chosen by SMEs in Sweden. This study examines whether the utility model protection available in other countries in the Baltic region provides SMEs in those countries with better options for protecting their innovations. The legal methodology is comparative law. In order to study the effects of the legal differences, statistics are examined and interviews conducted with SMEs from different industries.
Keywords: Baltic Sea region, comparative law, SME, utility model
Procedia PDF Downloads 114
14794 [Keynote Talk]: Software Reliability Assessment and Fault Tolerance: Issues and Challenges
Authors: T. Gayen
Abstract:
Although several software reliability models exist today, there is still no versatile model that can be used for the reliability assessment of software. Complex software has a large number of states (unlike hardware), so it is practically difficult to test the software completely. Irrespective of the amount of testing one does, it is sometimes extremely difficult to assure that the final software product is fault free. Black-box software reliability models have been found to be quite uncertain for the reliability assessment of various systems. Mission-critical applications need to be highly reliable, yet it is not always possible to ensure the development of a highly reliable system; hence, in order to achieve fault-free operation of software, one develops mechanisms to handle the faults remaining in the system even after development. Although several such techniques are currently in use to achieve fault tolerance, these mechanisms may not always be suitable for every system. This discussion is therefore focused on analyzing the issues and challenges faced by the existing techniques for reliability assessment and fault tolerance of various software systems.
Keywords: black box, fault tolerance, failure, software reliability
Procedia PDF Downloads 426
14793 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV- 2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted a training process using a five-fold (outer) nested repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as a case group and the non-Covid ARVI dataset as a control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93, 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: Therefore, ML models trained on MALDI MS of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, ML models exhibited lower classification accuracy.Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
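A compact sketch of the nested cross-validation scheme described above, transposed from R/caret to scikit-learn for illustration; the synthetic matrix stands in for the MALDI-MS spectra, only one of the seven algorithms (SVM-R) is shown, and the repeated inner folds are omitted for brevity.

```python
# Illustrative 5-fold (outer) x 10-fold (inner) nested cross-validation for an RBF SVM.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 300))                       # placeholder "spectra": 108 cases + 71 controls
y = np.r_[np.ones(108, dtype=int), np.zeros(71, dtype=int)]

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)   # inner folds: hyper-parameter tuning
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)    # outer folds: performance estimate

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
tuned = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]}, cv=inner)
print("outer-CV accuracy: %.3f" % cross_val_score(tuned, X, y, cv=outer).mean())
```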
Procedia PDF Downloads 108
14792 Russian Spatial Impersonal Sentence Models in Translation Perspective
Authors: Marina Fomina
Abstract:
The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of predicative feature. It is the multifaceted nature of the category of subject that 1) triggers a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions / sentence parts vs. parts of speech issues, etc.); 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), communicative approaches, etc. Many linguists consider the prototypical approach to the category of subject to be the most instrumental as it reveals the integrity of denotative and linguistic components of the conceptual category. This approach relates to subject as a source of non-passive predicative feature, an element of subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category with some members more central than others. Translation-wise, the most “treacherous” subject types are the peripheral ones. The paper 1) features a peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) makes a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, 4) reveals non-prototypical features of subjects in the English equivalents.Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject
Procedia PDF Downloads 370
14791 Deep Learning Strategies for Mapping Complex Vegetation Patterns in Mediterranean Environments Undergoing Climate Change
Authors: Matan Cohen, Maxim Shoshany
Abstract:
Climatic, topographic and geological diversity, together with frequent disturbance and recovery cycles, produce highly complex spatial patterns of trees, shrubs, dwarf shrubs and bare-ground patches. Assessment of the spatial and temporal variations of these life-form patterns under climate change is of high ecological priority. Here we report on one of the first attempts to discriminate between images of three Mediterranean life-form patterns at three densities. The development of an extensive database of orthophoto images representing these nine pattern categories was instrumental for training and testing pre-trained and newly trained deep learning models based on the DenseNet architecture. Both models demonstrated the advantages of deep learning approaches over existing spectral and spatial (pattern or texture) algorithmic methods in differentiating the nine life-form spatial mixture categories.
Keywords: texture classification, deep learning, desert fringe ecosystems, climate change
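A minimal sketch of fine-tuning a DenseNet for the nine pattern categories, using torchvision for illustration; the data, input size and hyper-parameters are assumptions, not the authors' setup.

```python
# Illustrative DenseNet fine-tuning step for 9 life-form/density classes (dummy data).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 9                                # 3 life-form patterns x 3 densities
# weights=None keeps the sketch offline; in practice one would load ImageNet weights
# (e.g. weights=models.DenseNet121_Weights.DEFAULT) for the "pre-trained" variant.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)  # replace the head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# one illustrative training step on a dummy batch of orthophoto tiles (3-channel, 224x224)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("dummy-batch loss:", float(loss))
```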
Procedia PDF Downloads 88
14790 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Authors: Rutej R. Mehta, Michael A. Chappell
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.Keywords: arterial spin labelling, dispersion, MRI, perfusion
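For reference, a sketch of the single-compartment white-paper quantification formula (Alsop et al., 2015) that the study evaluates; the signal values below are placeholders, the parameter defaults are the commonly recommended 3T values, and the formula itself contains no dispersion term, which is the point at issue.

```python
# Illustrative white-paper CBF quantification for (P)CASL; inputs are placeholder values.
import numpy as np

def cbf_white_paper(delta_m, m0, pld, tau, t1_blood=1.65, alpha=0.85, lam=0.9):
    """CBF in ml/100g/min from the label/control difference delta_m and proton-density signal m0."""
    return (6000.0 * lam * delta_m * np.exp(pld / t1_blood)) / (
        2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood)))

# placeholder signals: difference of 1% of M0, post-labelling delay 1.8 s, label duration 1.8 s
print("CBF ~ %.1f ml/100g/min" % cbf_white_paper(delta_m=0.01, m0=1.0, pld=1.8, tau=1.8))
```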
Procedia PDF Downloads 372
14789 Impact of Global Warming on the Total Flood Duration and Flood Recession Time in the Meghna Basin Using Hydrodynamic Modelling
Authors: Karan Gupta
Abstract:
Floods cause huge losses each year, and their impact is multiplied as the total flood duration and recession time increase. Moreover, floods in floodplains have increased in recent years due to climate change. In this context, the Paris Agreement (2015) committed to keeping the increase in global average temperature well below 2°C and to limiting it to 1.5°C. This study therefore investigates the impact of increasing temperature on the stage and discharge as well as on the total flood duration and recession time in the Meghna River basin in Bangladesh. The study considers the 100-year return period flood flows in the Meghna River under specific warming levels (SWLs) of 1.5°C, 2°C, and 4°C. The results showed that the rate of increase of the flood duration is nearly 50% lower at ∆T = 1.5°C than at ∆T = 2°C, whereas the rate of increase of the recession duration is 75% lower at ∆T = 1.5°C than at ∆T = 2°C. Understanding the change in total flood duration as well as flood recession time gives better insight for effective planning of flood mitigation measures.
Keywords: flood, climate change, Paris convention, Bangladesh, inundation duration, recession duration
Procedia PDF Downloads 142
14788 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.Keywords: artificial reasoning, human memory, machine learning, neural networks
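A toy sketch of the storage-free, attractor-style picture described above: a Hopfield-type network whose energy function makes the desired patterns stable equilibria. It is illustrative only and not drawn from any particular model cited here.

```python
# Illustrative Hopfield-style attractor network: patterns are stable minima of the energy.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))          # three "memories" as +/-1 vectors
W = (patterns.T @ patterns) / patterns.shape[1]       # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def energy(state):
    return -0.5 * state @ W @ state

def recall(state, sweeps=10):
    """Asynchronous updates descend the energy until a stable equilibrium is reached."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[rng.choice(64, size=10, replace=False)] *= -1   # corrupt 10 of 64 bits
recovered = recall(noisy)
print("energy noisy -> recovered:", energy(noisy), "->", energy(recovered))
print("overlap with stored pattern:", int(recovered @ patterns[0]))  # 64 would mean exact recall
```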
Procedia PDF Downloads 271
14787 Impact of Belongingness, Relational Communication, Religiosity and Screen Time on College Student Levels of Anxiety
Authors: Cherri Kelly Seese, Renee Bourdeaux, Sarah Drivdahl
Abstract:
Emergent adults in the United States are currently experiencing high levels of anxiety. It is imperative to uncover insulating factors which mitigate the impact of anxiety. This study aims to explore how constructs such as belongingness, relational communication, screen time and religiosity impact anxiety levels of emerging adults. Approximately 250 college students from a small, private university on the West Coast were given an online assessment that included: the General Belongingness Scale, Relational Communication Scale, Duke University Religion Index (DUREL), a survey of screen time, and the Beck Anxiety Inventory. A MANOVA statistical test was conducted by assessing the effects of multiple dependent variables (scores on GBS, RCS, self-reported screen time and DUREL) on the four different levels of anxiety as measured on the BAI (minimal = 1, mild =2, moderate = 3, or severe = 4). Results indicated a significant relationship between one’s sense of belonging and one’s reported level of anxiety. These findings have implications for systems, like universities, churches, and corporations that want to improve young adults’ level of anxiety.Keywords: anxiety, belongingness, relational communication, religiosity, screen time
Procedia PDF Downloads 174
14786 Investigation of Distortion and Impact Strength of 304 L Butt Joint Using Different Weld Groove
Authors: A. Sharma, S. S. Sandhu, A.Shahi, A. Kumar
Abstract:
In this study, the effects of the geometric configurations of butt joints, i.e., double V groove, double U groove and UV groove, in 12 mm thick AISI 304L welded by Gas Tungsten Arc Welding (GTAW) are investigated. The main objective of the study is the magnitude of the transverse shrinkage stress and the distortion generated during welding of the butt joints under unrestrained conditions. The effect of the groove design on impact strength and metallurgical properties is also studied. Finite element analysis of the groove designs is carried out and compared with the actual experiments; the experimental and FEM results reveal a very good correlation for distortion across the weld groove designs for the multipass joint, with an agreement of about 80%. The VV groove design was found to give the lowest transverse stress and cumulative deflection, the UV groove design had the maximum ultimate and yield tensile strengths, and the VV groove had the highest impact strength. The Vickers hardness of all the groove designs was measured. Microstructural studies were carried out using conventional microscopic tools, which revealed useful information for correlating the microstructure with the mechanical properties.
Keywords: weld groove design, distortion, AISI 304 L, butt joint, FEM, GTAW
Procedia PDF Downloads 366