Search results for: derivative and convoluted derivative methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15639

12099 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. However, for this approach to succeed, it is necessary to deploy the infrastructure required for the robotic community to achieve efficient self-localization as well as relative positioning and communication between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of overall support and, more specifically, on the positioning and navigation methods implemented in RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main limiting factors for precision and resolution, such as phase and frequency noise in RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectrum regions is considered in terms of both absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, coexisting with the communications network used by the individual agents to communicate both within the exploring community and with the mission support agents. The proposed approach represents a substantial improvement in planetary navigation, since it provides self-localization capabilities for robotic agents characterized by very low mass, volume and power budgets, thus enabling precise navigation for agents of reduced dimensions. Furthermore, a common, shared localization radiofrequency infrastructure enables new interaction mechanisms such as the spatial arrangement of agents over the area of interest for distributed sensing.
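
As a concrete illustration of the kind of range and position calculation the abstract surveys, the sketch below estimates an agent's position by linearized least-squares multilateration from two-way time-of-flight ranges to fixed reference agents. The anchor layout, noise level, and helper names are hypothetical, not taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_tof(tof_s):
    """Two-way time-of-flight ranging: d = c * t / 2."""
    return C * tof_s / 2

def position_from_ranges(anchors, ranges):
    """Linearized least-squares multilateration from >= 3 reference agents.
    Subtracting the first range equation removes the quadratic terms in x."""
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# hypothetical reference-agent layout (metres) and slightly noisy range measurements
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 55.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) \
         + np.random.default_rng(0).normal(0, 0.05, 4)
print(position_from_ranges(anchors, ranges))  # ≈ [30, 55]
```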

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 294
12098 Investigation and Perfection of Centrifugal Compressor Stages by CFD Methods

Authors: Y. Galerkin, L. Marenina

Abstract:

Stator elements «vane diffuser + crossover + return channel» of stages with different specific speeds were investigated by CFD calculations. A regime parameter was introduced to present the efficiency and loss-coefficient performance of all elements together. The flow structure revealed the advantages and disadvantages of the design. Flow separation in the crossovers was eliminated by modifying their shape, and efficiency increased visibly. The calculated CFD performances are in acceptable correlation with those predicted by the engineering design method. The information obtained is useful for better calibration of the design method.
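
The abstract does not give the authors' exact definitions, but a common way to quantify such stator-element performance is a total-pressure loss coefficient and a static-pressure recovery coefficient; a minimal sketch with hypothetical numbers:

```python
def loss_coefficient(p0_in, p0_out, rho, c_in):
    """One common definition: total-pressure loss normalized by inlet dynamic head."""
    return (p0_in - p0_out) / (0.5 * rho * c_in**2)

def pressure_recovery(p_out, p_in, p0_in):
    """Illustrative static-pressure recovery coefficient for a diffusing element."""
    return (p_out - p_in) / (p0_in - p_in)

# hypothetical stator-element data (Pa, kg/m^3, m/s), not the paper's values
print(loss_coefficient(180_000, 176_500, 1.8, 120.0))  # ≈ 0.27
print(pressure_recovery(172_000, 160_000, 180_000))    # ≈ 0.60
```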

Keywords: vane diffuser, return channel, crossover, efficiency, loss coefficient, inlet flow angle

Procedia PDF Downloads 431
12097 A Low Power Consumption Routing Protocol Based on a Meta-Heuristic

Authors: Kaddi Mohammed, Benahmed Khelifa, D. Benatiallah

Abstract:

A sensor network consists of a large number of sensors deployed over an area to monitor it and to communicate with each other through a wireless medium. Routing the collected data through the network consumes most of the energy of the sensor nodes. For this reason, multiple routing approaches have been proposed to conserve energy at the sensors and to overcome the challenges posed by its limitation. In this work, we propose a new low-energy-consumption routing protocol for wireless sensor networks based on a meta-heuristic method. Our protocol aims to spend energy more fairly when routing captured data to the base station.

Keywords: WSN, routing, energy, heuristic

Procedia PDF Downloads 343
12096 Airliner-UAV Flight Formation in Climb Regime

Authors: Pavel Zikmund, Robert Popela

Abstract:

Extreme formation is a theoretical concept of self-sustained flight in which a large airliner is followed by a small UAV glider flying in the airliner's wake vortex. The paper presents the results of a climb analysis whose goal is to lift the gliding UAV to the airliner's cruise altitude. Wake vortex models, the UAV's drag polar and basic parameters, and the airliner's climb profile are introduced first. Then, the flight performance of the UAV in the wake vortex is evaluated by analytical methods. The time history of the optimal distance between the airliner and the UAV during the climb is determined. The results are encouraging; therefore, the available UAV drag margin for electricity generation is determined for different vortex models.
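
The abstract does not say which wake vortex models were compared; two forms that are standard in the aircraft-wake literature are the Lamb-Oseen and Burnham-Hallock tangential-velocity profiles, sketched below with hypothetical circulation and core-radius values:

```python
import numpy as np

def lamb_oseen(r, gamma, r_c):
    """Lamb-Oseen tangential velocity, one common form with peak near r_c."""
    return gamma / (2 * np.pi * r) * (1 - np.exp(-1.2526 * (r / r_c) ** 2))

def burnham_hallock(r, gamma, r_c):
    """Burnham-Hallock model, widely used for aircraft wake vortices."""
    return gamma / (2 * np.pi) * r / (r**2 + r_c**2)

# hypothetical airliner wake: circulation ~500 m^2/s, core radius ~3 m
gamma, r_c = 500.0, 3.0
for r in (2.0, 5.0, 10.0, 20.0):
    print(f"r={r:5.1f} m  V_LO={lamb_oseen(r, gamma, r_c):6.2f} m/s  "
          f"V_BH={burnham_hallock(r, gamma, r_c):6.2f} m/s")
```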

Keywords: flight in formation, self-sustained flight, UAV, wake vortex

Procedia PDF Downloads 441
12095 Performance Analysis of MATLAB Solvers in the Case of a Quadratic Programming Generation Scheduling Optimization Problem

Authors: Dávid Csercsik, Péter Kádár

Abstract:

In the case of the proposed method, the problem is parallelized by considering multiple possible mode-of-operation profiles, which determine the range in which the generators operate in each period. For each of these profiles, the optimization is carried out independently, and the best resulting dispatch is chosen. For each such profile, the resulting problem is a quadratic programming (QP) problem with a potentially negative definite quadratic term Q, and constraints depending on the actual operation profile. In this paper, we analyze the performance of the available MATLAB optimization methods and solvers for the corresponding QP.
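
The paper benchmarks MATLAB solvers; purely for illustration, the sketch below states the same kind of QP, min 0.5 x'Qx + c'x with an indefinite Q and linear constraints, and solves it with SciPy's SLSQP. All numbers are made up; with an indefinite Q the solver only guarantees a local optimum, which is exactly the difficulty the abstract points to.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.5],
              [0.5, -1.0]])        # indefinite quadratic term (illustrative)
c = np.array([-1.0, -2.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

# inequality constraint A x <= b, written as g(x) >= 0 for SciPy
A = np.array([[1.0, 1.0]])
b = np.array([1.5])
cons = [{"type": "ineq", "fun": lambda x: b - A @ x}]
bounds = [(0.0, 2.0)] * 2          # stand-in for generator operating ranges

res = minimize(objective, x0=np.array([0.5, 0.5]),
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x, res.fun)              # a local optimum for this profile
```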

Keywords: optimization, MATLAB, quadratic programming, economic dispatch

Procedia PDF Downloads 549
12094 Increasing Access to Upper Limb Reconstruction in Cervical Spinal Cord Injury

Authors: Michelle Jennett, Jana Dengler, Maytal Perlman

Abstract:

Background: Cervical spinal cord injury (SCI) is a devastating event that results in upper limb paralysis, loss of independence, and disability. People living with cervical SCI have identified improvement of upper limb function as a top priority. Nerve and tendon transfer surgery has successfully restored upper limb function in cervical SCI but is not universally used or available to all eligible individuals. This exploratory mixed-methods study used an implementation science approach to better understand the factors that influence access to upper limb reconstruction in the Canadian context and to design an intervention to increase access to care. Methods: Data from the Canadian Institute for Health Information's Discharge Abstract Database (CIHI-DAD) and the National Ambulatory Care Reporting System (NACRS) were used to determine the annual rate of nerve transfer and tendon transfer surgeries performed in cervical SCI in Canada over the last 15 years. Semi-structured interviews informed by the Consolidated Framework for Implementation Research (CFIR) were used to explore Ontario healthcare providers' knowledge and practices around upper limb reconstruction. An inductive, iterative constant comparative process involving descriptive and interpretive analyses was used to identify themes that emerged from the data. Results: Healthcare providers (n = 10 upper extremity surgeons, n = 10 SCI physiatrists, n = 12 physical and occupational therapists working with individuals with SCI) were interviewed about their knowledge and perceptions of upper limb reconstruction and their current practices and discussions around it. Data analysis is currently underway and will be presented, as will regional variation in rates of upper limb reconstruction and trends over time. Conclusions: Utilization of nerve and tendon transfer surgery to restore upper limb function in Canada remains low. A complex array of interrelated individual-, provider- and system-level barriers prevents individuals with cervical SCI from accessing upper limb reconstruction. In order to offer equitable access to care, a multi-modal approach addressing current barriers is required.

Keywords: cervical spinal cord injury, nerve and tendon transfer surgery, spinal cord injury, upper extremity reconstruction

Procedia PDF Downloads 97
12093 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises: Horizon 2050

Authors: Farzaneh Sasanpour, Saeed Amini Varaki

Abstract:

Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, cities must adopt an approach that responds to sensitive conditions in the risk management process, namely resilience. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the approaches that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. Its general pattern is descriptive-analytical, and since it tries to relate the components, provide urban resilience indicators for pandemic crises, and explain the scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), structural analysis of mutual effects (cross-impact analysis) and the MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were arranged in 5 main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables subjected to mutual-effects analysis. Finally, the key factors and variables were categorized in five main areas: managerial-institutional with 5 variables, technology (smartness) with 3 variables, economic with 2 variables, socio-cultural with 3 variables, and physical-infrastructural with 7 variables. These factors and variables have been used as the key factors and effective driving forces on the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. To develop the scenarios, intuitive logic, scenario planning as a futures-research method, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn and selected with a creative method using the metaphor of weather conditions, indicative of the general outline of the conditions of the Tehran metropolis in each situation: 1- the solar scenario (optimal governance and management, leading in smart technology); 2- the cloudy scenario (optimal governance and management, following in smart technology); 3- the dark scenario (unfavorable governance and management, leading in smart technology); 4- the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst for the Tehran metropolis.
According to the findings of this research, city managers can, by applying futures-research methods to all the factors and components of urban resilience against pandemic crises, form a coherent picture with the long-term horizon of 2050, chart the path of urban resilience, and provide platforms for upgrading and increasing the capacity to deal with crises, creating the conditions needed for the realization, development and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
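
For readers unfamiliar with MICMAC, the structural (cross-impact) analysis the abstract relies on reduces to simple matrix arithmetic: direct influences are scored in a matrix, indirect influences are captured by matrix powers, and driving power and dependence fall out as row and column sums. The variable set and scores below are hypothetical, chosen only to mirror the five areas named above:

```python
import numpy as np

# hypothetical direct-influence matrix: entry [i, j] = influence of variable i on j (0-3)
variables = ["managerial", "technology", "economic", "socio-cultural", "infrastructure"]
M = np.array([[0, 3, 2, 2, 3],
              [2, 0, 2, 1, 3],
              [1, 2, 0, 1, 2],
              [1, 1, 1, 0, 1],
              [2, 2, 2, 1, 0]])

# indirect influences: raise the matrix to a power until the ranking stabilizes
indirect = np.linalg.matrix_power(M, 4)
influence = indirect.sum(axis=1)    # driving power (row sums)
dependence = indirect.sum(axis=0)   # dependence (column sums)
for v, inf, dep in zip(variables, influence, dependence):
    print(f"{v:15s} influence={inf:6d} dependence={dep:6d}")
# variables with high influence and low dependence act as the key driving forces
```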

Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran

Procedia PDF Downloads 67
12092 Legal Analysis of the Meaning of the Rule In dubio pro libertate for the Interpretation of Criminal Law Norms

Authors: Pavel Kotlán

Abstract:

The paper defines the role of the rule in dubio pro libertate in the interpretation of criminal law norms, which is one of the controversial and debated problems of the application of law. On the basis of an analysis of the law, including a comparison with the legal systems of various European countries, and of the accepted principles of legal interpretation, it can be concluded that the rule in dubio pro libertate can be used in cases where the linguistic, teleological and systematic methods fail, and, at the same time, that interpretation based on this rule should be preferred to subjective historical interpretation. The correct inclusion of the in dubio pro libertate rule in the choice of the interpretative variant can thus serve the judiciary in applying criminal law.

Keywords: application of law, criminal law norms, in dubio pro libertate, interpretation

Procedia PDF Downloads 5
12091 The Influence of Production Hygiene Training on Farming Practices Employed by Rural Small-Scale Organic Farmers - South Africa

Authors: Mdluli Fezile, Schmidt Stefan, Thamaga-Chitja Joyce

Abstract:

In view of the frequently reported foodborne disease outbreaks caused by contaminated fresh produce, consumers prefer foods that meet requisite hygiene standards, so as to reduce the risk of foodborne illness. Producing good-quality fresh produce then becomes critical in improving market access and food security, especially for small-scale farmers. Questions of hygiene and the resulting microbiological quality in the rural small-scale farming sector of South Africa are even more crucial, given the policy drive to develop small-scale farming as a measure for reinforcing household food security and reducing poverty. Farming practices and methods throughout the fresh produce value chain influence the quality of the final product, which in turn determines its success in the market. The aim of this study was therefore to determine the extent to which training on organic farming methods, including modules such as the Importance of Production Hygiene, influenced the hygienic farming practices employed by eTholeni small-scale organic farmers in uMbumbulu, KwaZulu-Natal, South Africa. Questionnaires were administered to 73 uncertified organic farmers; 33 of them had been trained and supplied the local Agri-Hub, while 40 had not received training. The questionnaire probed respondents' attitudes, knowledge of hygiene, and composting practices. Data analysis included descriptive statistics, the Chi-square test, and a logistic regression model. Descriptive analysis indicated that the majority of the farmers (60%) were female, most of whom (73%) were above the age of 40. The logistic regression indicated that factors such as farmer training and prior experience in the farming sector had a significant influence on hygiene practices, both at the 5% significance level. These results emphasize the importance of training, education and farming experience in implementing good hygiene practices in small-scale farming. It is therefore recommended that South African policies advocate for small-scale farmer training, not only for subsistence purposes but also with the aim of supplying produce markets with high-quality fresh produce.
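
To illustrate the kind of model the abstract describes, the sketch below fits a logistic regression of a binary good-hygiene indicator on training and prior experience. The data are simulated with made-up effect sizes; the study's actual variables and coefficients are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 73
trained = rng.integers(0, 2, n)        # 1 = received training
experience = rng.integers(0, 2, n)     # 1 = prior farming experience
# hypothetical: training and experience raise the odds of good hygiene practice
logit = -1.0 + 1.2 * trained + 0.9 * experience
good_hygiene = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([trained, experience]))
model = sm.Logit(good_hygiene, X).fit(disp=0)
print(model.summary(xname=["const", "trained", "experience"]))
# positive, significant coefficients would mirror the abstract's finding
```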

Keywords: small-scale farmers, leafy salad vegetables, organic produce, food safety, hygienic practices, food security

Procedia PDF Downloads 425
12090 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model

Authors: Snehal G. Teli, R. J. Shelke

Abstract:

CNN and MultiUNet models form the framework of the proposed method for enhancing and reconstructing underwater images. The MultiUNet performs both multiscale feature merging and regeneration, while the CNN collects the relevant features. Extensive tests on benchmark datasets show that the proposed strategy performs better than the latest methods. As a result of this work, underwater images can be represented and interpreted with greater clarity in a number of underwater applications. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.

Keywords: convolutional neural network, image enhancement, machine learning, multiunet, underwater images

Procedia PDF Downloads 77
12089 Involving Participants at the Methodological Design Stage: The Group Repertory Grid Approach

Authors: Art Tsang

Abstract:

In educational research, the scope of investigations has almost always been determined by researchers. As learners are at the forefront of education, it is essential to balance researchers’ and learners’ voices in educational studies. In this paper, a data collection method that helps partly address the dearth of learners’ voices in research design is introduced. Inspired by the repertory grid approach (RGA), the group RGA approach, created by the author and his doctoral student, was successfully piloted with learners in Hong Kong. This method will very likely be of interest and use to many researchers, teachers, and postgraduate students in the field of education and beyond.

Keywords: education, learners, repertory grids, research methods

Procedia PDF Downloads 59
12088 Extraction of Squalene from Lebanese Olive Oil

Authors: Henri El Zakhem, Christina Romanos, Charlie Bakhos, Hassan Chahal, Jessica Koura

Abstract:

Squalene, composed of 30 carbon atoms, is a valuable component of olive oil and is mainly used in cosmetic materials. The main concern of this article is to study the squalene content of Lebanese olive oil and to compare it with results for foreign oils. To our knowledge, extraction of squalene from Lebanese olive oil has not been conducted before. Three different techniques were studied, and experiments were performed on three brands of olive oil: Al Wadi Al Akhdar, Virgo Bio, and Boulos. The techniques performed were fractional crystallization, Soxhlet extraction, and esterification. Comparing the results shows that Lebanese oil contains squalene and that the Soxhlet method is the most effective of the three, extracting about 6.5E-04 grams of squalene per gram of olive oil.

Keywords: squalene, extraction, crystallization, Soxhlet

Procedia PDF Downloads 519
12087 Critical Analysis of Different Actuation Techniques for a Micro Cantilever

Authors: B. G. Sheeparamatti, Prashant Hanasi, Vanita Abbigeri

Abstract:

The objective of this work is to carry out a critical comparison of different actuation mechanisms, namely electrostatic, thermal, piezoelectric, and magnetic, with reference to a microcantilever. Relevant parameters such as generated force and displacement are compared across the actuation methods. These results help in choosing the best actuation method for a particular application. In this study, COMSOL Multiphysics software is used: modeling and simulation consider a microcantilever of the same dimensions as an actuator under each of the above-mentioned actuation techniques. In addition to their small size, microactuators consume very little power and are capable of accurate results. In this work, a comparison of actuation mechanisms is made to identify the most efficient system in the micro domain.
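
As a first-order sanity check of the kind of comparison the paper performs in COMSOL, the textbook parallel-plate electrostatic force and end-loaded cantilever deflection can be estimated by hand. The dimensions, voltage, and material below are hypothetical, not the paper's model:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(voltage, area, gap):
    """F = eps0 * A * V^2 / (2 g^2) for a parallel-plate actuator."""
    return EPS0 * area * voltage**2 / (2 * gap**2)

def tip_deflection(force, length, width, thickness, youngs_modulus):
    """delta = F L^3 / (3 E I) for an end-loaded cantilever, with I = w t^3 / 12."""
    inertia = width * thickness**3 / 12
    return force * length**3 / (3 * youngs_modulus * inertia)

# hypothetical silicon microcantilever: 200 x 20 x 2 um, 10 V across a 2 um gap
F = electrostatic_force(10.0, 200e-6 * 20e-6, 2e-6)
d = tip_deflection(F, 200e-6, 20e-6, 2e-6, 169e9)
print(f"force = {F:.3e} N, tip deflection = {d:.3e} m")
```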

Keywords: actuation techniques, microswitch, micro actuator, microsystems

Procedia PDF Downloads 409
12086 Modeling of the Europay, MasterCard, and Visa Card Authentications by Using the AVISPA Tool

Authors: Ossama Al-Maliki

Abstract:

Europay, MasterCard, and Visa (EMV) is the transaction protocol used across most of the world, especially in Europe and the UK. The EMV protocol consists of three main stages: card authentication, cardholder verification, and transaction authorization. This paper details EMV card authentication in full. We used the AVISPA and SPAN tools to model the EMV card authentication methods. The code for each type of card authentication was written in the CAS+ language. The results showed that our models successfully addressed all the steps of the EMV card authentication methods and that the entire card authentication process is secure. Our models also addressed all the main goals behind EMV card authentication according to the EMV specifications.

Keywords: EMV, card authentication, contactless card, SDA, DDA, CDA, AVISPA

Procedia PDF Downloads 178
12085 Therapeutic Touch from Primary Care to Tertiary Care in Health Services

Authors: Ayşegül Bilge, Hacer Demirkol, Merve Uğuryol

Abstract:

Therapeutic touch is one of the most important methods of complementary and alternative treatment. Therapeutic touch requires the sharing of universal energy, and it provides interaction between the patient and the nurse. In addition, nurses can become aware of patients' physical and mental symptoms through therapeutic touch. Therapeutic touch (TT) is brief to administer, which is an advantage for the nurse. For this reason, nurses have to be aware of the importance of therapeutic touch, and they can use it in nursing practice from primary care to tertiary care in the health field.

Keywords: health care services, complementary treatment, nursing, therapeutic touch

Procedia PDF Downloads 347
12084 UK GAAP and IFRS Standards: Similarities and Differences

Authors: Feddaoui Amina

Abstract:

This paper aims to help researchers and international companies understand the differences and similarities between IFRS (International Financial Reporting Standards) and UK GAAP (UK accounting principles), and to follow the accounting changes between the standard setting of the International Accounting Standards Board and that of the Accounting Standards Board in the United Kingdom. In this study, we use statistical methods to calculate the frequencies of similarities and differences between the UK standards and the IFRS standards, according to the 2005 PricewaterhouseCoopers report. We use a one-sample test to confirm or reject our hypothesis. In conclusion, we found that the gap between UK GAAP and IFRS is small.
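
The one-sample test on similarity frequencies can be illustrated with a binomial test: given counts of items judged similar versus different, test whether the true proportion of similarities exceeds one half. The counts below are invented for illustration and are not the PricewaterhouseCoopers figures.

```python
from scipy.stats import binomtest

# hypothetical counts, not the report's data: of 100 compared items,
# 72 are classified as "similar" between UK GAAP and IFRS
n_items, n_similar = 100, 72

# one-sample (binomial) test of H0: true proportion of similarities = 0.5
result = binomtest(n_similar, n_items, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")  # a small p-value: similarities dominate, the gap is small
```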

Keywords: accounting, UK GAAP, IFRS, similarities, differences

Procedia PDF Downloads 211
12083 Gamification as a Tool for Influencing Customers' Behaviour

Authors: Beata Zatwarnicka-Madura

Abstract:

The objective of the article was to identify the impact of gamification on customers' behaviour. The most important applications of games in marketing and the mechanisms of gamification are presented in the article. A detailed analysis of the influence of gamification on customers, using the two brands Foursquare and Nike, is also presented. Research studies using an auditorium survey method were carried out among 176 young respondents, who are potential targets of gamification. The studies confirmed high participation of young people in customer loyalty programs, with relatively low participation in other gamification-based marketing activities. The research findings clearly indicate that gamification mechanisms are the most attractive.

Keywords: customer loyalty, games, gamification, social aspects

Procedia PDF Downloads 490
12082 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Often, some of the functional requirements remain unknown until late stages of product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. First approaches exist for combined, hybrid models comprising deterministic-normative methods, like the Stage-Gate process, and empirical-adaptive development methods, like Scrum, on a project management level. However, the question of which development scopes are best realized with empirical-adaptive rather than deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors, like a company's technological ability, the prototype manufacturability and the potential solution space, as well as external factors, like market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated, for every single internal and external factor, with respect to its suitability for empirical-adaptive development. Finally, the total sums of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion by which the allocation of empirical-adaptive and deterministic-normative development scopes can be made. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm, which determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
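
The final clustering step can be illustrated with a minimal fuzzy c-means implementation: unlike hard clustering, each development scope receives a graded membership in every cluster. The one-dimensional Agile-Indicator scores below are hypothetical:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# hypothetical development scopes described by their Agile-Indicator score
scores = np.array([[0.10], [0.20], [0.15], [0.80], [0.90], [0.85], [0.50]])
centers, U = fuzzy_c_means(scores, n_clusters=2)
print(centers.ravel())   # cluster prototypes
print(U.round(2))        # soft split: empirical-adaptive vs deterministic-normative
```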

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 246
12081 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and a decreasing level of safety occur commonly in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. Therefore, it is crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way the assessment is done, and its reliability, are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking decisions regarding risk assessment, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to indicate crucial parameters that could be modeled with a Risk Assessment Model (RAM) for improving construction enterprise productivity and/or development potential and safety climate. The developed RAM could be of benefit for predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, under which over 450 individual research cycles were carried out. The conducted research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in the behaviors and habits of staff. A genetic modeling tool was used for developing the RAM. Findings and value added: Common types of trades, accidents, and accident causes have been explored, in addition to suitable risk assessment methods and criteria. We have found that the initial worker stress level is a more direct predictor of developing the unsafe chain leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.

Keywords: safety climate, occupational health, civil engineering, productivity

Procedia PDF Downloads 318
12080 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem by using modified wild dog pack optimization. The least-squares error is considered as the cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa, and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.
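
The cost function driving the alpha-dog selection is the classical least-squares criterion of distance-based phylogenetics: the sum over taxon pairs of the squared gap between the observed distances and the path-length distances induced by a candidate tree. A minimal sketch with a made-up 4-taxon matrix:

```python
import numpy as np

def least_squares_cost(observed, tree_dist):
    """Sum of squared differences between the observed distance matrix and the
    path-length distances induced by a candidate tree (each pair counted once)."""
    diff = observed - tree_dist
    return np.sum(np.triu(diff, k=1) ** 2)

# hypothetical observed distances for 4 taxa
observed = np.array([[0, 3, 7, 8],
                     [3, 0, 6, 7],
                     [7, 6, 0, 5],
                     [8, 7, 5, 0]], dtype=float)

# a perturbed candidate standing in for one "dog" (tree) in the pack
rng = np.random.default_rng(1)
candidate = observed + rng.normal(0, 0.2, observed.shape)
candidate = (candidate + candidate.T) / 2
np.fill_diagonal(candidate, 0)
print(least_squares_cost(observed, candidate))  # the dog with the lowest cost becomes alpha
```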

Keywords: least square, neighbor joining, phylogenetic tree, wild dog pack

Procedia PDF Downloads 320
12079 Performance of the SrSnO₃/SnO₂ Nanocomposite Catalyst on the Photocatalytic Degradation of Dyes

Authors: H. Boucheloukh, N. Aoun, M. Denni, A. Mahrouk, T. Sehili

Abstract:

Perovskite materials containing the alkaline earth metal strontium have attracted researchers in photocatalysis. Thus, a strontium-based nanocomposite has been synthesized by the sol-gel method, calcined at 700 °C, and characterized by different methods such as X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FTIR), and diffuse reflectance spectroscopy (DRS). After that, the photocatalytic performance of SrSnO₃/SnO₂ was tested under sunlight in aqueous solution for two dyes, methylene blue and Congo red. The results reveal that 70% of the methylene blue had already been degraded after 45 minutes of exposure to sunlight, while 80% of the Congo red was eliminated by adsorption on SrSnO₃/SnO₂ within 120 minutes of contact.
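
Photocatalytic dye degradation is commonly summarized with pseudo-first-order (Langmuir-Hinshelwood) kinetics, ln(C₀/C) = k_app·t. Applying that assumption to the 70%-in-45-minutes figure (a back-of-envelope estimate, not a rate reported by the authors) gives:

```python
import math

degraded_fraction = 0.70   # methylene blue after 45 min of sunlight
t = 45.0                   # minutes

# pseudo-first-order apparent rate constant: k_app = ln(C0/C) / t
k_app = math.log(1 / (1 - degraded_fraction)) / t
print(f"apparent rate constant ≈ {k_app:.4f} 1/min")  # ≈ 0.0268 1/min
print(f"half-life ≈ {math.log(2) / k_app:.1f} min")   # ≈ 25.9 min
```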

Keywords: congo-red, methylene blue, photocatalysis, perovskite

Procedia PDF Downloads 55
12078 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. The application of predictive quality exploits a great potential for saving on necessary quality control through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data situation makes all the more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
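
The regression-versus-classification question can be made concrete on synthetic data: predict a continuous leakage flow and threshold it, or predict the pass/fail inspection decision directly. Everything below (features, leakage model, limit value) is invented, since the Bosch Rexroth data are not public:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for hydraulic test data
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))                         # sensor / process features
leakage = X[:, 0] * 0.5 + X[:, 1] ** 2 + rng.normal(0, 0.8, 2000)
ok_part = (leakage < 1.5).astype(int)                  # inspection decision from a limit

X_tr, X_te, y_tr, y_te, lab_tr, lab_te = train_test_split(
    X, leakage, ok_part, test_size=0.3, random_state=0)

# route 1: regress the leakage volume flow, then threshold the prediction
reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
acc_reg = accuracy_score(lab_te, (reg.predict(X_te) < 1.5).astype(int))

# route 2: classify the inspection decision directly
clf = RandomForestClassifier(random_state=0).fit(X_tr, lab_tr)
acc_clf = accuracy_score(lab_te, clf.predict(X_te))

print(f"regression + threshold accuracy: {acc_reg:.3f}")
print(f"direct classification accuracy:  {acc_clf:.3f}")
```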

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 145
12077 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for some defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is always a function, called the “objective function”, that is required to be maximized or minimized by choosing input parameters, called “degrees of freedom”, within an allowed domain, called the “search space”, and computing the values of the objective function for these input values. The problem becomes more complex when we have more than one objective for our design. As an example of a Multi-Objective Optimization Problem (MOP), consider a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases; at this point, the designer should make a decision and choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and then the solution progresses towards the optimal point by using operators such as selection, combination, cross-over and/or mutation. These operators are applied to the old solutions, the “parents”, so that new sets of design variables, called “children”, appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automobiles. Coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
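
At the core of any MOEA sits the Pareto dominance relation that decides which candidate designs survive. A minimal sketch with made-up designs evaluated on two objectives, both to be minimized (e.g., weight and a strength deficit):

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized here)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors."""
    return np.array([p for i, p in enumerate(points)
                     if not any(dominates(q, p)
                                for j, q in enumerate(points) if j != i)])

# hypothetical designs: (weight, strength deficit), both minimized
designs = np.array([[2.0, 5.0], [3.0, 3.0], [4.0, 4.0], [1.5, 6.0], [5.0, 2.0]])
print(pareto_front(designs))   # [4, 4] is dominated by [3, 3] and drops out
```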

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 256
12076 Mathematical and Fuzzy Logic in the Interpretation of the Quran

Authors: Morteza Khorrami

Abstract:

Logic, as an intellectual infrastructure, plays an essential role in the Islamic sciences. Hence, there are a few verses of the Holy Quran whose interpretation is not possible without proper logic. In many verses of the Quran, an argument is made and a response is requested from the audience, which shows that rules of logic are present in the Quran. The paper, which uses a descriptive and analytic method, tries to show the role of logic in understanding the Quran's reasoning methods, to display some Quranic statements with mathematical symbols, and to point out that these symbols can help in interpretation and in answering some questions and doubts. The paper also notes that the Quran did not use two-valued (Aristotelian) logic in all cases; fuzzy logic can also be found in the Quran.

Keywords: aristotelian logic, fuzzy logic, interpretation, Holy Quran

Procedia PDF Downloads 677
12075 An Analysis of Business Intelligence Requirements in South African Corporates

Authors: Adheesh Budree, Olaf Jacob, Louis CH Fourie, James Njenga, Gabriel D Hoffman

Abstract:

Business Intelligence (BI) is implemented by organisations for many reasons, chief among these being improved data support, decision support, and savings. The main purpose of this study is to determine BI requirements and availability within South African organisations. The study addresses the following areas, identified as part of a literature review: assessing BI practices in businesses over a range of industries, sectors and managerial functions, and determining the functionality of BI (technologies, architecture and methods). It was found that overall satisfaction with BI in larger organisations is low due to a lack of ability to meet user requirements.

Keywords: business intelligence, business value, data management, South Africa

Procedia PDF Downloads 577
12074 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work aims at presenting a study that demonstrates the usability of categories of analysis from Discourse Semiotics – also known as Greimassian Semiotics – in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts that differ in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows. The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts; if language is both expression and content, content would also have to be considered for more accurate results, since style is present in both planes. Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and presents different levels of abstraction; this ‘grammar’ would be a style marker. Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations; how, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing; if two authors recurrently choose the same options, differently from one another, each one's option has discriminatory power. Size is another issue for various attribution methods: since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail, whereas the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed the hypothesis was confirmed and, hence, that the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
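
The Jaccard coefficient used to compare the tagged texts is simply the shared-feature ratio |A ∩ B| / |A ∪ B|. A tiny sketch with hypothetical semiotic tags (the actual category labels in the study may differ):

```python
def jaccard(a, b):
    """Jaccard coefficient: |A ∩ B| / |A ∪ B| for two sets of features."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# hypothetical content-plane tags extracted from two texts
text1_tags = {"euphoria", "conjunction", "actorial_subject", "thematic_role"}
text2_tags = {"euphoria", "disjunction", "actorial_subject", "figurative"}
print(f"similarity = {jaccard(text1_tags, text2_tags):.2f}")  # 2 shared / 6 total ≈ 0.33
```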

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 242
12073 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
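
The regression side of the workflow (spectra in, gel strength out, judged by R² and RMSE) can be sketched as follows. The spectra are simulated and the SVR hyperparameters arbitrary; variable selection steps such as UVE are omitted:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# synthetic stand-in for NIR spectra (the study's kudzu data are not public):
# 120 samples x 200 wavelengths, gel strength driven by a few informative bands
rng = np.random.default_rng(7)
spectra = rng.normal(size=(120, 200))
strength = 50 + 8 * spectra[:, 30] - 5 * spectra[:, 150] + rng.normal(0, 1.5, 120)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, strength,
                                          test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmsep = mean_squared_error(y_te, pred) ** 0.5
print(f"Rp^2 = {r2_score(y_te, pred):.3f}, RMSEP = {rmsep:.3f}")
```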

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 37
12072 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. An important feature of the training images is that they are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture; it extracts the lung mask from the chest X-ray image and is trained on 8577 images with a validation split of 20%. The models' accuracy, precision, recall, F1-score, IoU, and loss are calculated on the external dataset used for validation. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
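
A minimal transfer-learning sketch in the spirit of the classification model: a frozen, ImageNet-pretrained DenseNet201 backbone feeding a small dense head with a three-way softmax. The paper's autoencoder stage, head sizes, and training schedule are not specified in the abstract, so those choices below are assumptions:

```python
import tensorflow as tf

# frozen DenseNet201 backbone acting as a fixed feature extractor
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),   # assumed head size
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a tf.data pipeline of X-rays
```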

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 130
12071 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N.Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (frequency of chromosomal aberrations in lymphocytes) and physical (radionuclide analysis in urine, whole-body counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After determining radionuclides in urine using radiochemical and whole-body-counter methods, it was shown that the total effective dose of the personnel's internal exposure did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the frequency of chromosomal aberrations in staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. A cytogenetic analysis of group radiosensitivity by different criteria (age, sex, ethnic group, epidemiological data) among the «professionals» revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used. Assuming an individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results comes down to the different reactions of subjects to irradiation - their radiosensitivity - which dictates the need to quantify this individual reaction and to take it into account in the calculation of the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%); in contrast, radioresistant individuals showed the lowest frequency (2.8%). According to the criterion of radiosensitivity, the cohort in our research was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals was 2.3, for the group with medium radiosensitivity 3.3, and for the radiosensitive group 9; these data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n = 48, β ≥ 0.999) between the dose values determined from the cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters.
Mathematical models of the received radiation dose, taking the professionals' radiosensitivity level into account, were proposed.
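
The standard machinery behind the biological dose estimates the abstract reports is a linear-quadratic calibration curve for dicentric yield, Y = c + αD + βD², fitted from in-vitro irradiations and then inverted for an observed yield. The coefficients and counts below are illustrative placeholders, not the study's fitted curve:

```python
import math

# illustrative linear-quadratic coefficients (yields per cell, Gy^-1, Gy^-2)
c, alpha, beta = 0.001, 0.02, 0.06

def dose_from_yield(y):
    """Solve beta*D^2 + alpha*D + (c - y) = 0 for the positive root."""
    disc = alpha**2 - 4 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2 * beta)

dicentrics, cells = 12, 4400           # hypothetical scoring counts
print(f"estimated dose ≈ {dose_from_yield(dicentrics / cells):.3f} Gy")
```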

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
12070 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with the state-of-the-art methods applied directly on data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
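
Steps (i)-(ii) can be sketched with an off-the-shelf skip-gram model: tokenize each read into overlapping k-mers, learn k-mer embeddings, and pool them into a read embedding. The reads, k, and model sizes below are toy values, and mean pooling is just one simple option for step (ii):

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read, k=4):
    """Tokenize a DNA read into overlapping k-mers (the 'words' of the model)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# toy reads; real inputs would be parsed from fastq files
reads = ["ATGCGTACGTTAGC", "GGCATGCGTACCTA", "TTAGCGGCATGCGT"]
corpus = [kmers(r) for r in reads]

# step (i): learn numerical k-mer embeddings with a skip-gram model
model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, sg=1, epochs=50)
print(model.wv["ATGC"][:4])                  # embedding of one k-mer

# step (ii), one simple option: a read embedding as the mean of its k-mer vectors
read_vec = np.mean([model.wv[km] for km in corpus[0]], axis=0)
print(read_vec[:4])
```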

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125