Search results for: biocontrol methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15344

12104 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model

Authors: Snehal G. Teli, R. J. Shelke

Abstract:

The proposed method for enhancing and reconstructing underwater images is built on a CNN and the MultiUNet model. The CNN extracts the relevant features, while the MultiUNet performs both multiscale feature merging and regeneration. Extensive tests on benchmark datasets show that the proposed strategy outperforms the latest methods. As a result of this work, underwater images can be represented and interpreted with greater clarity in a range of underwater applications. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.
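The multiscale feature merging performed by the MultiUNet can be illustrated with a toy 1-D sketch; the pooling/upsampling scheme and all values below are illustrative assumptions, not the authors' implementation.

```python
# Toy illustration of multiscale feature merging in the spirit of a
# MultiUNet-style decoder: features are pooled to a coarser scale,
# upsampled back, and merged with the fine-scale features.
# The 1-D setting and all values are illustrative assumptions.

def avg_pool(x, k=2):
    """Downsample a 1-D feature sequence by averaging blocks of k."""
    return [sum(x[i:i + k]) / k for i in range(0, len(x), k)]

def upsample(x, k=2):
    """Nearest-neighbour upsampling: repeat each value k times."""
    return [v for v in x for _ in range(k)]

def merge_multiscale(fine, k=2):
    """Merge fine-scale features with a pooled-and-upsampled copy."""
    coarse = upsample(avg_pool(fine, k), k)
    return [(f + c) / 2 for f, c in zip(fine, coarse)]

if __name__ == "__main__":
    feats = [1.0, 3.0, 2.0, 6.0]
    print(merge_multiscale(feats))  # [1.5, 2.5, 3.0, 5.0]
```

The merged sequence blends local detail (the fine features) with context (the coarse average), which is the intuition behind multiscale skip connections in U-Net-style decoders.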

Keywords: convolutional neural network, image enhancement, machine learning, multiunet, underwater images

Procedia PDF Downloads 75
12103 Involving Participants at the Methodological Design Stage: The Group Repertory Grid Approach

Authors: Art Tsang

Abstract:

In educational research, the scope of investigations has almost always been determined by researchers. As learners are at the forefront of education, it is essential to balance researchers’ and learners’ voices in educational studies. In this paper, a data collection method that helps partly address the dearth of learners’ voices in research design is introduced. Inspired by the repertory grid approach (RGA), the group RGA approach, created by the author and his doctoral student, was successfully piloted with learners in Hong Kong. This method will very likely be of interest and use to many researchers, teachers, and postgraduate students in the field of education and beyond.

Keywords: education, learners, repertory grids, research methods

Procedia PDF Downloads 59
12102 Extraction of Squalene from Lebanese Olive Oil

Authors: Henri El Zakhem, Christina Romanos, Charlie Bakhos, Hassan Chahal, Jessica Koura

Abstract:

Squalene, a compound of 30 carbon atoms, is a valuable component of olive oil and is mainly used in cosmetic materials. The main concern of this article is to study the squalene content of Lebanese olive oil and to compare it with results reported for foreign oils. To our knowledge, extraction of squalene from Lebanese olive oil has not been attempted before. Three different techniques were studied, and experiments were performed on three brands of olive oil: Al Wadi Al Akhdar, Virgo Bio, and Boulos. The techniques performed were fractional crystallization, Soxhlet extraction, and esterification. Comparing the results shows that Lebanese oil contains squalene and that the Soxhlet method is the most effective of the three, extracting about 6.5E-04 grams of squalene per gram of olive oil.
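The reported yield can be put in practical terms with a short calculation; the olive-oil density used below is an assumed typical value, not taken from the abstract.

```python
# Back-of-the-envelope use of the reported Soxhlet yield:
# about 6.5e-4 g of squalene per gram of olive oil.
# The oil density (~0.915 g/mL) is an assumed typical value.

YIELD_G_PER_G = 6.5e-4        # reported Soxhlet extraction yield
OIL_DENSITY_G_PER_ML = 0.915  # assumed typical olive-oil density

def squalene_per_litre(yield_g_per_g=YIELD_G_PER_G,
                       density=OIL_DENSITY_G_PER_ML):
    """Estimated grams of squalene recoverable from one litre of oil."""
    grams_of_oil = density * 1000.0  # 1 L = 1000 mL
    return yield_g_per_g * grams_of_oil

if __name__ == "__main__":
    print(f"{squalene_per_litre():.2f} g per litre")  # about 0.59 g/L
```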

Keywords: squalene, extraction, crystallization, Soxhlet

Procedia PDF Downloads 519
12101 Critical Analysis of Different Actuation Techniques for a Micro Cantilever

Authors: B. G. Sheeparamatti, Prashant Hanasi, Vanita Abbigeri

Abstract:

The objective of this work is to carry out a critical comparison of different actuation mechanisms (electrostatic, thermal, piezoelectric, and magnetic) with reference to a microcantilever. The relevant parameters, such as generated force and displacement, are compared across the actuation methods; these results help in choosing the best actuation method for a particular application. In this study, COMSOL Multiphysics software is used. Modeling and simulation are done by considering a microcantilever of the same dimensions as the actuator for all of the above-mentioned actuation techniques. In addition to their small size, microactuators consume very little power and are capable of accurate results. In this work, a comparison of actuation mechanisms is carried out to identify the most efficient system in the micro domain.
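For the force/displacement comparison above, a standard analytical baseline is the Euler-Bernoulli tip deflection of a rectangular cantilever, delta = F*L^3 / (3*E*I) with I = w*t^3/12; the dimensions and Young's modulus below are assumed example values, not the paper's.

```python
# Static tip deflection of a rectangular cantilever under a point
# load at its free end (Euler-Bernoulli beam theory, SI units).
# The example dimensions and silicon modulus are assumptions.

def tip_deflection(force, length, width, thickness, youngs_modulus):
    """delta = F*L^3 / (3*E*I), with I = w*t^3/12 for a rectangle."""
    inertia = width * thickness ** 3 / 12.0
    return force * length ** 3 / (3.0 * youngs_modulus * inertia)

if __name__ == "__main__":
    # 200 um x 20 um x 2 um silicon cantilever, 1 uN tip load
    d = tip_deflection(force=1e-6, length=200e-6, width=20e-6,
                       thickness=2e-6, youngs_modulus=169e9)
    print(f"tip deflection = {d * 1e6:.2f} um")  # roughly 1.2 um here
```

A sketch like this gives a sanity check for the simulated displacements before comparing the actuation mechanisms in COMSOL.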

Keywords: actuation techniques, microswitch, micro actuator, microsystems

Procedia PDF Downloads 409
12100 Modulation of the Europay, MasterCard, and VisaCard Authentications by Using Avispa Tool

Authors: Ossama Al-Maliki

Abstract:

Europay, MasterCard, and Visa (EMV) is the transaction protocol used in most of the world, especially in Europe and the UK. The EMV protocol consists of three main stages: card authentication, cardholder verification, and transaction authorization. This paper details the EMV card authentications in full. We have used the AVISPA and SPAN tools to model the EMV card authentications, writing the model for each type of card authentication in the CAS+ language. The results showed that our models successfully addressed all the steps of the EMV card authentications and that the entire EMV card authentication process is secure. Our models also successfully addressed all the main goals behind the EMV card authentications according to the EMV specifications.

Keywords: EMV, card authentication, contactless card, SDA, DDA, CDA, AVISPA

Procedia PDF Downloads 178
12099 Therapeutic Touch from Primary Care to Tertiary Care in Health Services

Authors: Ayşegül Bilge, Hacer Demirkol, Merve Uğuryol

Abstract:

Therapeutic touch is one of the most important complementary and alternative treatment methods. It involves the sharing of universal energy, and it provides interaction between the patient and the nurse. In addition, through therapeutic touch, nurses can become aware of patients' physical and mental symptoms. Because therapeutic touch (TT) is brief, it offers a practical advantage for the nurse. For these reasons, nurses should be aware of the importance of therapeutic touch and can use it in nursing practice from primary care to tertiary care in health services.

Keywords: health care services, complementary treatment, nursing, therapeutic touch

Procedia PDF Downloads 347
12098 UK GAAP and IFRS Standards: Similarities and Differences

Authors: Feddaoui Amina

Abstract:

This paper aims to acquaint researchers and international companies with the differences and similarities between IFRS (International Financial Reporting Standards) and UK GAAP (UK accounting principles), and with the accounting changes between the standard setting of the International Accounting Standards Board and that of the Accounting Standards Board in the United Kingdom. Statistical methods are used to calculate the frequencies of similarities and differences between the UK standards and IFRS standards, according to the 2005 PricewaterhouseCoopers report. A one-sample test is used to confirm or reject our hypothesis. In conclusion, we found that the gap between UK GAAP and IFRS is small.

Keywords: accounting, UK GAAP, IFRS, similarities, differences

Procedia PDF Downloads 210
12097 Gamification as a Tool for Influencing Customers' Behaviour

Authors: Beata Zatwarnicka-Madura

Abstract:

The objective of the article was to identify the impact of gamification on customers' behaviour. The most important applications of games in marketing and the mechanisms of gamification are presented in the article, together with a detailed analysis of gamification's influence on customers using two brands, Foursquare and Nike. Survey research was carried out among 176 young respondents, who are potential targets of gamification. The studies confirmed high participation of young people in customer loyalty programs but relatively low participation in other gamification-based marketing activities. The research findings clearly indicate which gamification mechanisms are the most attractive.

Keywords: customer loyalty, games, gamification, social aspects

Procedia PDF Downloads 490
12096 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Often, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are initial approaches to combined, hybrid models comprising deterministic-normative methods, like the Stage-Gate process, and empirical-adaptive development methods, like Scrum, on the project management level. However, the question of which development scopes are preferably realized with either empirical-adaptive or deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors, like a company's technology ability, the prototype manufacturability, and the potential solution space, as well as external factors, like the market accuracy, relevance, and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated, for every single internal and external factor, with respect to its suitability for empirical-adaptive development. Finally, the total sums of the internal and external sides are combined into the Agile-Indicator.
Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that carry market uncertainty into empirical-adaptive or deterministic-normative development scopes.
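The three derivation steps described above can be sketched as a weighted sum over factor scores; the factor names, weights, and scores below are invented for illustration and are not the authors' calibration.

```python
# Sketch of the Agile-Indicator derivation: weight each internal and
# external factor by its importance, score a requirement against each
# factor's suitability for empirical-adaptive development, and combine
# the internal and external sums. All numbers are invented examples.

def agile_indicator(weights, scores):
    """Weighted sum of factor scores; weights and scores share keys."""
    return sum(weights[f] * scores[f] for f in weights)

if __name__ == "__main__":
    internal_w = {"technology ability": 0.5,
                  "prototype manufacturability": 0.3,
                  "solution space": 0.2}
    internal_s = {"technology ability": 0.8,
                  "prototype manufacturability": 0.6,
                  "solution space": 0.9}
    external_w = {"market accuracy": 0.4,
                  "market relevance": 0.3,
                  "market volatility": 0.3}
    external_s = {"market accuracy": 0.2,
                  "market relevance": 0.7,
                  "market volatility": 0.9}
    indicator = (agile_indicator(internal_w, internal_s)
                 + agile_indicator(external_w, external_s))
    print(indicator)  # higher values suggest an empirical-adaptive scope
```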

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 246
12095 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and a decreasing level of safety commonly occur in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled, and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. Therefore, it is crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way the assessment is done and its reliability are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking decisions regarding risk assessment, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to identify crucial parameters that could be modelled with the Risk Assessment Model (RAM) to improve building enterprise productivity and/or development potential and safety climate. The developed RAM could be of benefit for predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, under which over 450 individual research cycles were carried out. The conducted research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in staff behaviours and habits.
A genetic modelling tool has been used for developing the RAM. Findings and value added: Common types of trades, accidents, and accident causes have been explored, in addition to suitable risk assessment methods and criteria. We have found that the initial worker stress level is a more direct predictor of the unsafe chain of events leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.

Keywords: safety climate, occupational health, civil engineering, productivity

Procedia PDF Downloads 318
12094 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as a cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes are selected and collected from National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.
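The least-squares cost minimised in each iteration of the algorithm above can be sketched as follows; the 3-taxon distance matrices are hypothetical, not data from NCBI.

```python
# Least-squares error between an observed pairwise distance matrix and
# the distances implied by a candidate phylogenetic tree. The matrices
# below are hypothetical 3-taxon examples.

def least_squares_error(observed, implied):
    """Sum of squared differences over the upper triangle of two
    symmetric distance matrices (given as lists of lists)."""
    n = len(observed)
    return sum((observed[i][j] - implied[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

if __name__ == "__main__":
    observed = [[0, 3, 5],
                [3, 0, 4],
                [5, 4, 0]]
    implied = [[0, 3, 4.5],
               [3, 0, 4.5],
               [4.5, 4.5, 0]]  # path lengths along a candidate tree
    print(least_squares_error(observed, implied))  # 0.5
```

In each iteration, the candidate tree (the "alpha dog") would be the one whose implied distances minimise this error.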

Keywords: least square, neighbor joining, phylogenetic tree, wild dog pack

Procedia PDF Downloads 320
12093 Performance of the SrSnO₃/SnO₂ Nanocomposite Catalyst on the Photocatalytic Degradation of Dyes

Authors: H. Boucheloukh, N. Aoun, M. Denni, A. Mahrouk, T. Sehili

Abstract:

Perovskite materials containing the alkaline earth metal strontium have attracted researchers in photocatalysis. A strontium-based nanocomposite has been synthesized by the sol-gel method, calcined at 700 °C, and characterized by different methods such as X-ray diffraction (XRD), Fourier-transform infrared (FTIR) spectroscopy, and diffuse reflectance spectroscopy (DRS). The photocatalytic performance of SrSnO₃/SnO₂ has then been tested under sunlight in aqueous solution for two dyes, methylene blue and Congo red. The results reveal that 70% of the methylene blue had already been degraded after 45 minutes of exposure to sunlight, while 80% of the Congo red was eliminated by adsorption on SrSnO₃/SnO₂ within 120 minutes of contact.

Keywords: congo-red, methylene blue, photocatalysis, perovskite

Procedia PDF Downloads 55
12092 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the data-based prediction of product quality and states offers great potential for reducing the necessary quality control effort. However, the series use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. The training of prediction models requires the highest possible generalisability, which this data availability makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
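The regression-versus-classification question can be made concrete: the leakage volume flow is a continuous regression target, while the inspection decision is a binary label obtained by comparing that flow to a specification limit. The limit value below is a hypothetical placeholder, not a Bosch Rexroth specification.

```python
# Mapping a continuous leakage measurement (regression target) to a
# pass/fail inspection decision (classification target). The limit is
# an assumed placeholder value.

SPEC_LIMIT = 2.0  # hypothetical maximum acceptable leakage volume flow

def inspection_label(leakage_flow, limit=SPEC_LIMIT):
    """Classify a leakage measurement against the specification limit."""
    return "pass" if leakage_flow <= limit else "fail"

if __name__ == "__main__":
    measurements = [0.4, 1.9, 2.3, 0.0]
    print([inspection_label(m) for m in measurements])
    # ['pass', 'pass', 'fail', 'pass']
```

Framed this way, a classifier only has to separate the two sides of the limit, which is one intuition for why classification can outperform regression on low-variance production data.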

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
12091 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is always a function, called the "objective function", that is to be maximized or minimized by choosing input parameters, called "degrees of freedom", within an allowed domain, called the "search space", and computing the values of the objective function for these input values. The problem becomes more complex when a design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best cases. At this point, a designer should make a decision and choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used with multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and the solution then progresses towards the optimal point by using operators such as selection, combination, cross-over, and/or mutation. These operators are applied to the old solutions ("parents") so that new sets of design variables called "children" appear. The process is repeated until the optimal solution to the problem is reached.
Reliable and robust computational fluid dynamics (CFD) solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automobiles. The coupling of CFD and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
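The Pareto Optimal Frontier mentioned above can be computed with a simple non-dominance filter; both objectives are minimised here (e.g. weight and the inverse of strength), and the sample design points are invented for illustration.

```python
# Pareto-front extraction for a two-objective minimisation problem.
# A design is kept only if no other design dominates it, i.e. is at
# least as good in every objective and strictly better in one.

def dominates(a, b):
    """True if design a dominates design b (minimisation)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    designs = [(2, 9), (3, 7), (5, 6), (4, 8), (6, 4)]
    print(sorted(pareto_front(designs)))
    # [(2, 9), (3, 7), (5, 6), (6, 4)]: (4, 8) is dominated by (3, 7)
```

In an MOEA, this filter is typically applied each generation to rank the "children" before selection.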

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 256
12090 Mathematical and Fuzzy Logic in the Interpretation of the Quran

Authors: Morteza Khorrami

Abstract:

Logic, as an intellectual infrastructure, plays an essential role in the Islamic sciences. Hence, there are some verses of the Holy Quran whose interpretation is not possible without proper logic. In many verses, the Quran presents arguments and requests responses from its audience, which shows that rules of logic are present in the Quran. This paper, which uses a descriptive and analytic method, tries to show the role of logic in understanding the Quran's methods of reasoning, displays some Quranic statements with mathematical symbols, and points out that these symbols can help in interpretation and in answering some questions and doubts. The paper also notes that the Quran did not use two-valued (Aristotelian) logic in all cases; fuzzy logic can also be found in the Quran.

Keywords: aristotelian logic, fuzzy logic, interpretation, Holy Quran

Procedia PDF Downloads 676
12089 An Analysis of Business Intelligence Requirements in South African Corporates

Authors: Adheesh Budree, Olaf Jacob, Louis CH Fourie, James Njenga, Gabriel D Hoffman

Abstract:

Business Intelligence (BI) is implemented by organisations for many reasons, chief among them improved data support, decision support, and savings. The main purpose of this study is to determine BI requirements and availability within South African organisations. The study addresses the following areas, identified as part of a literature review: assessing BI practices in businesses over a range of industries, sectors, and managerial functions, and determining the functionality of BI (technologies, architecture, and methods). It was found that the overall satisfaction with BI in larger organisations is low due to an inability to meet user requirements.

Keywords: business intelligence, business value, data management, South Africa

Procedia PDF Downloads 577
12088 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study was performed with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.). Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows:
- The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes.
- Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and that presents different levels of abstraction. This 'grammar' would be a style marker.
- Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors?
- The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, each one's options have discriminatory power.
- Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent.
The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed that the hypothesis was confirmed; hence, the grammatical categories of the content plane may successfully be used in questioned-authorship scenarios.
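The Jaccard coefficient used for the quantitative comparison has a compact definition, |A ∩ B| / |A ∪ B|; the tag names below are invented for illustration and are not the study's actual tag set.

```python
# Jaccard similarity between two sets of semantic tags: the size of
# the intersection divided by the size of the union. Tag names here
# are invented examples.

def jaccard(a, b):
    """Jaccard similarity between two sets; 1.0 for identical sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    author1_tags = {"thematic", "figurative", "euphoric", "temporal"}
    author2_tags = {"thematic", "figurative", "dysphoric"}
    print(jaccard(author1_tags, author2_tags))  # 0.4
```

Under the study's hypothesis, tag sets from the same author should score higher than tag sets from different authors.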

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intra-speaker variation, style

Procedia PDF Downloads 242
12087 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy, and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
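The R² and RMSE figures reported above follow standard definitions, sketched here on invented predicted-versus-measured gel-strength values (not the study's data).

```python
# Standard regression metrics used to evaluate the NIR models:
# root-mean-square error and the coefficient of determination.
# The toy measured/predicted values are invented.

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

if __name__ == "__main__":
    measured = [100.0, 120.0, 140.0, 160.0]
    predicted = [102.0, 118.0, 143.0, 158.0]
    print(round(rmse(measured, predicted), 3),
          round(r_squared(measured, predicted), 4))
```

RMSEC/RMSEP and Rc/Rp in the abstract are these quantities computed on the calibration and prediction sets, respectively.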

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 37
12086 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of a student's subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. The fact of comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students' comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of a student's subjective experience (emotional-value, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; and (4) identifying and leveraging the relationships between real, perceptual, and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. The research's primary goal was to identify how the proposed methods and technology influence the understanding of material used in teaching mathematics. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric functions', was evaluated. Control group participants were taught using traditional methods.
Students in the experimental group were taught using the holistic method: under the teacher's guidance, they worked through problems designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate the test results. Pearson's criterion was used to assess the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the experimental group showed better comprehension of mathematical information than those in the control group. It was also revealed, using Student's t-test, that students in the experimental group reliably solved more problems (p = 0.0001) than those in the control group. The results obtained allow us to conclude that the increased comprehension and assimilation of the study material resulted from the applied methods and techniques.
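The Pearson criterion applied to pass/fail outcomes corresponds to a chi-squared test on a 2x2 contingency table; the counts below are hypothetical (chosen only to match the stated group sizes of 142 and 114), not the study's data.

```python
# Pearson chi-squared statistic for a 2x2 contingency table
# [[a, b], [c, d]] (pass/fail counts by group), without Yates
# correction. The example counts are hypothetical.

def chi_squared_2x2(a, b, c, d):
    """chi2 = n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

if __name__ == "__main__":
    # rows: experimental (142), control (114); columns: passed, failed
    stat = chi_squared_2x2(110, 32, 60, 54)
    print(round(stat, 2))  # compare with the chi2 critical value, df = 1
```

With 1 degree of freedom, a statistic above 10.83 corresponds to p < 0.001, the significance level reported in the abstract.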

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 176
12085 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model share an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation. The models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays.
The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of such methods.
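The IoU (intersection over union) reported for the segmentation model can be illustrated with a minimal sketch; the tiny binary masks below are illustrative stand-ins for predicted and ground-truth lung masks:

```python
# Intersection-over-Union (IoU) for binary segmentation masks,
# represented here as nested lists of 0/1 pixels.

def iou(pred, target):
    """IoU = overlapping foreground pixels / union of foreground pixels."""
    inter = union = 0
    for prow, trow in zip(pred, target):
        for p, t in zip(prow, trow):
            inter += 1 if (p and t) else 0
            union += 1 if (p or t) else 0
    return inter / union if union else 1.0

pred   = [[0, 1, 1],
          [0, 1, 0]]
target = [[0, 1, 0],
          [0, 1, 1]]
print(iou(pred, target))  # 2 overlapping pixels / 4 in the union -> 0.5
```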

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 130
12084 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body radiation counter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determination of radionuclides in urine using radiochemical and WBC methods, it was shown that the total effective dose of personnel internal exposure did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the frequency of chromosomal aberrations in staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by various criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated at 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used.
Herewith, assuming individual variation of chromosomal aberration frequency (1-10%), the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in interpreting individual dosimetry results comes down to the objects' differing reactions to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and take it into account when calculating the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals showed the highest frequency of chromosomal aberrations (5.72%) at the lowest dose received in a year. In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). By the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n = 48, β ≥ 0.999) between dose values determined from the results of cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters. Mathematical models were proposed for adjusting the assessed radiation dose according to professionals' radiosensitivity levels.
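Dose reconstruction from aberration yields, as described above, conventionally inverts a linear-quadratic calibration curve Y = c + aD + bD². A minimal sketch follows, with illustrative coefficients rather than the authors' own calibration:

```python
import math

# Dose reconstruction from dicentric + centric-ring yield via a
# linear-quadratic calibration curve Y = c + a*D + b*D^2, as is standard
# in cytogenetic biodosimetry.  The coefficients below are illustrative,
# NOT the calibration obtained by the authors.

def dose_from_yield(y, c=0.001, a=0.01, b=0.05):
    """Invert Y = c + a*D + b*D^2 for the absorbed dose D (Gy)."""
    disc = a * a + 4 * b * (y - c)
    if disc < 0:
        return 0.0
    return (-a + math.sqrt(disc)) / (2 * b)

# A yield of ~0.27% dicentrics and centric rings, as reported for the
# examined group, maps with these illustrative coefficients to a dose
# in the same range as the group estimate of 0.084-0.143 Gy.
print(round(dose_from_yield(0.0027), 2))  # 0.11
```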

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
12083 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and affect the estimation of microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple-instance-learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
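Step (i) of the pipeline described above, building a k-mer vocabulary from raw reads, can be sketched in a few lines; the reads and the value of k below are illustrative:

```python
from collections import Counter

# Step (i) of an end-to-end metagenomic pipeline: counting all
# overlapping k-mers across a collection of reads to build the
# vocabulary whose embeddings are learned downstream.

def kmer_vocabulary(reads, k=4):
    """Count every overlapping k-mer across all reads."""
    vocab = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            vocab[read[i:i + k]] += 1
    return vocab

reads = ["ATCGATCG", "GGATCCAT"]
vocab = kmer_vocabulary(reads, k=4)
print(vocab["ATCG"])  # 2: the 4-mer ATCG appears twice in the first read
```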

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125
12082 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be present in future driving; its immersive technology can show directions and maps, using graphic elements to identify important places when the car driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex one in which situations commonly arise that require the driver's immediate attention and decisions that help avoid accidents. Therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect that such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are combined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, obtaining driver interaction in real mode; data acquisition for storage is achieved through an MSP430 microcontroller. The sensors provide a continuous analog signal in time that needs signal conditioning. At this point, a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, along with filtering, which consists of eliminating frequency bands of the signal so that it is interpretable and noise-free; the signal is then converted from analog to digital in order to analyze the drivers' physiological signals, and these values are stored in a database.
Based on this compilation, we work on the extraction of signal features and implement k-NN (k-nearest neighbor) and decision-tree classification methods (supervised learning) that enable the study of the data, the identification of patterns, and the determination, by classification, of the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the variables required to determine the effect caused by augmented reality on people in simulated driving.
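The k-NN classification step can be sketched in pure Python; the feature vectors and labels below are hypothetical stand-ins for the extracted physiological features:

```python
import math
from collections import Counter

# Minimal k-nearest-neighbor classifier over extracted signal features.
# The features and class labels are illustrative, not the study's data.

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(p, x), lbl) for p, lbl in zip(train, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy features: (normalized EEG amplitude, normalized heart rate).
train = [(0.2, 0.3), (0.25, 0.35), (0.8, 0.9), (0.85, 0.8)]
labels = ["attentive", "attentive", "distracted", "distracted"]
print(knn_predict(train, labels, (0.22, 0.31)))  # attentive
```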

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 141
12081 Reasons and Complexities around Using Alcohol and Other Drugs among Aboriginal People Experiencing Homelessness

Authors: Mandy Wilson, Emma Vieira, Jocelyn Jones, Alice V. Brown, Lindey Andrews, Louise Southalan, Jackie Oakley, Dorothy Bagshaw, Patrick Egan, Laura Dent, Duc Dau, Lucy Spanswick

Abstract:

Alcohol and drug dependency are pertinent issues for those experiencing homelessness. This includes Aboriginal and Torres Strait Islander people, Australia’s traditional owners, living in Perth, Western Australia (WA). Societal narratives around the drivers behind drug and alcohol dependency in Aboriginal communities, particularly those experiencing homelessness, have been biased and unchanging, with little regard for complexity. This can include the idea that Aboriginal people have ‘chosen’ to use alcohol or other drugs without consideration for intergenerational trauma and the trauma of homelessness that may influence their choices. These narratives have flow-on impacts on policies and services that directly impact Aboriginal people experiencing homelessness. In 2021, we commenced a project which aimed to listen to and elevate the voices of 70-90 Aboriginal people experiencing homelessness in Perth. The project is community-driven, led by an Aboriginal Community Controlled Organisation in partnership with a university research institute. A community-ownership group of Aboriginal Elders endorsed the project’s methods, chosen to ensure their suitability for the Aboriginal community. In this paper, we detail these methods, including semi-structured interviews influenced by an Aboriginal yarning approach – an important style of conversation for Aboriginal people which follows cultural protocols; and photovoice – supporting people to share their stories through photography. Through these engagements, we detail the reasons Aboriginal people in Perth shared for using alcohol or other drugs while experiencing homelessness. These included supporting their survival on the streets, managing their mental health, and coping while on the journey to finding support. We also detail why they sought to discontinue alcohol and other drug use, including wanting to reconnect with family and changing priorities. 
Finally, we share how Aboriginal people experiencing homelessness have said they are impacted by their family’s alcohol and other drug use, including feeling uncomfortable living with a family who is drug and alcohol-dependent and having to care for grandchildren despite their own homelessness. These findings provide a richer understanding of alcohol and drug use for Aboriginal people experiencing homelessness in Perth, shedding light on potential changes to targeted policy and service approaches.

Keywords: Aboriginal and Torres Strait Islander peoples, alcohol and other drugs, homelessness, community-led research

Procedia PDF Downloads 131
12080 Fractional Calculus into Structural Dynamics

Authors: Jorge Lopez

Abstract:

In this work, we introduce fractional calculus in order to study the dynamics of a damped multistory building with some symmetry. Initially, we review the dynamics of a free and damped multistory building. Then we introduce the concepts of fractional calculus that will be involved in our study. It has been noticed that fractional calculus provides models with fewer parameters than those based on classical calculus. In particular, a damped classical oscillator is more naturally described using fractional derivatives. Accordingly, we model our multistory building as a set of coupled fractional oscillators and compare its dynamics with the results coming from traditional methods.
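As a hedged sketch of the modelling idea (assuming the Caputo definition, which the abstract does not specify), a single damped fractional oscillator can be written as

```latex
% Damped fractional oscillator: the first-order damping term is replaced
% by a Caputo fractional derivative of order 0 < \alpha < 1.
m\,\ddot{x}(t) + c\,{}^{C}\!D^{\alpha}_{t}x(t) + k\,x(t) = f(t),
\qquad
{}^{C}\!D^{\alpha}_{t}x(t)
  = \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t}\frac{\dot{x}(\tau)}{(t-\tau)^{\alpha}}\,d\tau .
```

A multistory building would then be modelled by a system of such equations, one per story, coupled through the interstory stiffness terms; classical damping is recovered as the limit α → 1.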

Keywords: coupled oscillators, fractional calculus, fractional oscillator, structural dynamics

Procedia PDF Downloads 242
12079 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm

Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali

Abstract:

Since optimization problems in water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and sound utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; dam utilization rule curves are also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and reservoir-related data, and the geometric characteristics of the reservoir. The Dez dam water resources system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the model. Rule curves were first obtained through the genetic algorithm. Then the significance of using rule curves and of the decrease in decision variables in the system was determined through system simulation and by comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables; therefore, a lot of time is required to find an optimum answer, and in some cases, no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern technique to reduce the number of variables.
Water reservoir programming studies have been performed based on basic information, general hypotheses, and standards, applying a monthly simulation technique over a statistical period of 30 years. Results indicated that application of the rule curve prevents extreme shortages and decreases the monthly shortages.
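A minimal genetic-algorithm sketch for evolving a monthly rule curve (twelve storage targets) is shown below. The toy fitness merely penalizes deviation from a hypothetical demand profile; it stands in for the full Dez reservoir simulation used as the objective in the study:

```python
import random
random.seed(0)  # reproducible toy run

# Hypothetical monthly demand profile (fractions of reservoir capacity).
DEMAND = [0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 0.8, 0.7, 0.6, 0.5, 0.5]

def fitness(curve):
    """Toy objective: negated squared deviation from the demand profile."""
    return -sum((c - d) ** 2 for c, d in zip(curve, DEMAND))

def evolve(pop_size=30, gens=100, mut=0.1):
    pop = [[random.random() for _ in range(12)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 12)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:           # random-reset mutation
                child[random.randrange(12)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 4))
```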

Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir

Procedia PDF Downloads 265
12078 Critical Investigation on Performance of Polymeric Materials in Rehabilitation of Metallic Components

Authors: Parastou Kharazmi

Abstract:

Failure and leakage of metallic components due to corrosion in infrastructure is a considerable and expensive problem, and the traditional solution of replacing the component is costly and time-consuming. Rehabilitation techniques using advanced polymeric materials are an alternative solution to this problem. This paper provides a summary of analyses of relined, rehabilitated metallic samples after exposure under practical, real-world conditions, studying the composite material's performance when exposed to water, heat, and chemicals. The study was carried out using different test methods, including microscopic, thermal, chemical, and mechanical analyses.

Keywords: composite, material, rehabilitation, structure

Procedia PDF Downloads 236
12077 Review of Cable Fault Locating Methods and Usage of VLF for Real Cases of High Resistance Fault Locating

Authors: Saadat Ali, Rashid Abdulla Ahmed Alshehhi

Abstract:

Cable faults are always probable and common during or after commissioning, causing significant delays and disrupting the power distribution or transmission network, which is intolerable for utilities and service providers given their reliability and business-continuity commitments. Therefore, the adoption of a rapid localization and rectification methodology is their main concern. This paper explores the techniques presently available for high-voltage cable fault localization and rectification and discusses which are preferable as easier, faster, and less harmful to cables. It also shares practical experience of high-resistance fault locating using the Very Low Frequency (VLF) method.

Keywords: faults, VLF, real cases, cables

Procedia PDF Downloads 112
12076 Elevating Healthcare Social Work: Implementing and Evaluating the (Introduction, Subjective, Objective, Assessment, Plan, Summary) Documentation Model

Authors: Shir Daphna-Tekoah, Nurit Eitan-Gutman, Uri Balla

Abstract:

Background: Systematic documentation is essential in social work practice. Collaboration between an institution of higher education and social work health care services enabled adaptation of the medical SOAP documentation model to the field of social work by creating the ISOAPS (Introduction, Subjective, Objective, Assessment, Plan, Summary) model. Aims: The article describes the ISOAPS model and its implementation in the field of social work as a tool for standardizing documentation and enhancing multidisciplinary collaboration. Methods: We examined the changes in standardization using a mixed-methods study, both before and after implementation of the model. A review of social workers' documentation was carried out by medical staff and social workers in Clalit Healthcare Services, the largest provider of public and semi-private health services in Israel. After implementation of the model, semi-structured qualitative interviews were undertaken. Main findings: The percentage of reviewers who evaluated their documentation as correct increased from 46% before implementation to 61% after implementation. After implementation, 81% of the social workers noted that their documentation had become standardized. The training process prepared them for the change in documentation, and most of them (83%) started using the model on a regular basis. The qualitative data indicate that use of the ISOAPS model creates uniform documentation, improves standards, and is important to teach to social work students. Conclusions: The ISOAPS model standardizes documentation and promotes communication between social workers and medical staff. Implications for practice: In the intricate realm of healthcare, efficient documentation systems are pivotal to ensuring coherent interdisciplinary communication and patient care. The ISOAPS model emerges as a quintessential instrument, meticulously tailored to the nuances of social work documentation.
While it extends its utility across the broad spectrum of social work, its specificity is most pronounced in the medical domain. This model not only exemplifies rigorous academic and professional standards but also serves as a testament to the potential of contextualized documentation systems in elevating the overall stature of social work within healthcare. Such a strategic documentation tool can not only streamline the intricate processes inherent in medical social work but also underscore the indispensable role that social workers play in the broader healthcare ecosystem.

Keywords: ISOAPS, professional documentation, medical social work, social work

Procedia PDF Downloads 70
12075 Automatic Intelligent Analysis of Malware Behaviour

Authors: Hermann Dornhackl, Konstantin Kadletz, Robert Luh, Paul Tavolato

Abstract:

In this paper, we describe the use of formal methods to model malware behaviour. The modelling of harmful behaviour rests upon syntactic structures that represent malicious procedures inside malware. The malicious activities are modelled by a formal grammar, in which the components of API calls are the terminals and the sets of API calls used in combination to achieve a goal are designated as non-terminals. The combinations of different non-terminals in various ways and tiers make up the attack vectors used by harmful software. Based on these syntactic structures, a parser can be generated that takes execution traces as input for pattern recognition.
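A toy version of this grammar-based detection can be sketched as follows: terminals are API calls, and a non-terminal names a malicious procedure composed of them. The grammar and trace below are illustrative, not the authors' actual attack vectors:

```python
# Toy grammar-based malware-behaviour matcher: a non-terminal expands to
# an ordered sequence of terminal API calls, and detection checks whether
# that sequence occurs as a subsequence of an execution trace.

GRAMMAR = {
    # non-terminal -> ordered sequence of terminals (API calls)
    "INJECT": ["OpenProcess", "VirtualAllocEx",
               "WriteProcessMemory", "CreateRemoteThread"],
}

def matches(trace, pattern):
    """True if pattern occurs as an in-order subsequence of the trace."""
    it = iter(trace)
    return all(call in it for call in pattern)  # membership consumes it

trace = ["LoadLibrary", "OpenProcess", "VirtualAllocEx", "ReadFile",
         "WriteProcessMemory", "CreateRemoteThread", "ExitProcess"]
found = [nt for nt, pat in GRAMMAR.items() if matches(trace, pat)]
print(found)  # ['INJECT']
```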

Keywords: malware behaviour, modelling, parsing, search, pattern matching

Procedia PDF Downloads 332