Search results for: explicit algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4073

1973 Prescription of Lubricating Eye Drops in the Emergency Eye Department: A Quality Improvement Project

Authors: Noorulain Khalid, Unsaar Hayat, Muhammad Chaudhary, Christos Iosifidis, Felipe Dhawahir-Scala, Fiona Carley

Abstract:

Dry eye disease (DED) is a common condition seen in the emergency eye department (EED) at Manchester Royal Eye Hospital (MREH). However, there is variability in the prescription of lubricating eye drops among different healthcare providers. The aim of this study was to develop an up-to-date, standardized algorithm for the prescription of lubricating eye drops in the EED at MREH based on international and national guidelines. The study also aimed to assess the impact of implementing the guideline on the rate of inappropriate lubricant prescriptions. The impact was assessed primarily through the appropriateness of prescriptions for patients’ DED and secondarily through analysis of the cost to the hospital. Data from 845 patients who attended the EED over a 3-month period were analyzed, and 157 patients met the inclusion and exclusion criteria. After conducting a review of the literature and collaborating with the corneal team, an algorithm for the prescription of lubricants in the EED was developed. Three plan-do-study-act (PDSA) cycles were conducted, with interventions such as emails, posters, in-person reminders, and education for incoming trainees. The appropriateness of prescriptions was evaluated against the guidelines. Data were collected from patient records and analyzed using statistical methods. The appropriateness of prescriptions was assessed by comparing them to the guidelines and by clinical correlation with a specialist registrar. The study found a substantial improvement in the number of appropriate prescriptions, with an increase from 55% to 93% over the three PDSA cycles. There was additionally a 51% reduction in expenditure on lubricant prescriptions, resulting in cost savings for the hospital (an approximate saving of £50/week). Theoretical importance: Appropriate prescription of lubricating eye drops improves disease management for patients and reduces costs for the hospital.
The development and implementation of a standardized guideline facilitate the achievement of these goals. Conclusion: This study highlights the inconsistent management of DED in the EED and the potential lack of training in this area for healthcare providers. The implementation of a standardized, easy-to-follow guideline for lubricating eye drops can help to improve disease management while also resulting in cost savings for the hospital.

Keywords: lubrication, dry eye disease, guideline, prescription

Procedia PDF Downloads 78
1972 A Task Scheduling Algorithm in Cloud Computing

Authors: Ali Bagherinia

Abstract:

An efficient task scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing offers features such as flexibility and virtualization. In this paper, we propose a two-level task scheduling method based on load balancing in cloud computing. The proposed method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator demonstrate.
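The two-level idea can be illustrated with a small sketch (a generic least-loaded scheme, not necessarily the authors' method; host and VM names are hypothetical):

```python
# Hypothetical two-level load-balancing scheduler sketch: level 1 picks the
# least-loaded host, level 2 the least-loaded VM on that host.

def schedule(tasks, hosts):
    """tasks: list of task lengths; hosts: dict host -> list of VM loads."""
    placement = []
    for length in sorted(tasks, reverse=True):  # longest task first
        host = min(hosts, key=lambda h: sum(hosts[h]))                   # level 1
        vm = min(range(len(hosts[host])), key=lambda i: hosts[host][i])  # level 2
        hosts[host][vm] += length
        placement.append((host, vm))
    return placement

hosts = {"h0": [0.0, 0.0], "h1": [0.0]}
print(schedule([5, 3, 2], hosts))  # -> [('h0', 0), ('h1', 0), ('h1', 0)]
```

Sorting tasks longest-first keeps a single large task from arriving after the loads have already been balanced around small ones.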

Keywords: cloud computing, task scheduling, virtualization, SLA

Procedia PDF Downloads 406
1971 Securing Mobile Ad-Hoc Network Utilizing OPNET Simulator

Authors: Tariq A. El Shheibia, Halima Mohamed Belhamad

Abstract:

This paper is considered securing data based on multi-path protocol (SDMP) in mobile ad hoc network utilizing OPNET simulator modular 14.5, including the AODV routing protocol at the network as based multi-path algorithm for message security in MANETs. The main idea of this work is to present a way that is able to detect the attacker inside the MANETs. The detection for this attacker will be performed by adding some effective parameters to the network.

Keywords: MANET, AODV, malicious node, OPNET

Procedia PDF Downloads 301
1970 Deep Q-Network for Navigation in Gazebo Simulator

Authors: Xabier Olaz Moratinos

Abstract:

Drone navigation is critical, particularly during the initial phases such as the first ascent, where strong external interferences may cause pilots to fail and potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters while subject to external disturbances of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
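The core of any DQN agent, including one managing external forces, is the Bellman target it regresses toward. A minimal sketch of that computation (illustrative numbers, not the drone setup):

```python
import numpy as np

# Sketch of the DQN target computation: y = r + gamma * max_a' Q_target(s', a'),
# with terminal transitions masked out so no future value leaks past episode ends.

def dqn_targets(rewards, q_next, dones, gamma=0.99):
    """rewards: (N,); q_next: (N, n_actions) from the target network;
    dones: (N,) booleans marking episode ends."""
    return rewards + gamma * (1.0 - dones.astype(float)) * q_next.max(axis=1)

r = np.array([1.0, 0.0])
qn = np.array([[0.5, 2.0], [1.0, 3.0]])
d = np.array([False, True])
print(dqn_targets(r, qn, d))  # -> [2.98, 0.0]
```

The online network is then trained to minimize the squared error between its Q-values for the taken actions and these targets.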

Keywords: machine learning, DQN, Gazebo, navigation

Procedia PDF Downloads 83
1969 Effectiveness of a Sports Nutrition Intervention for High-School Athletes: A Feasibility Study

Authors: Michael Ryan, Rosemary E. Borgerding, Kimberly L. Oliver

Abstract:

The objective of this study was to assess the effectiveness of a sports nutrition intervention on body composition in high-school athletes. The study aimed to improve the food and water intake of high-school athletes, evaluate the cost-effectiveness of the intervention, and assess changes in body fat. Data were collected through observations, questionnaires, and interviews. Additionally, bioelectrical impedance analysis was performed to assess the body composition of athletes both before and after the intervention. Athletes (n=25) participated in researcher-monitored training sessions three times a week over the course of 12 weeks. During these sessions, in addition to completing their auxiliary sports training, participants were exposed to educational interventions aimed at improving their nutrition. These included discussions regarding current eating habits, nutritional guidelines for athletes, and individualized recommendations. Food was also made available to athletes for consumption before and after practice. Meals of balanced macronutrient composition were prepared and provided to athletes on four separate occasions throughout the intervention, either prior to or following a competitive event such as a tournament or game. A paired t-test was used to determine the statistical significance of the changes in body fat percentage. The results showed a statistically significant difference between pre- and post-intervention body fat percentage (p = .006). A Cohen's d of 0.603 was calculated, indicating a moderate effect size. In conclusion, this study provides evidence that a sports nutrition intervention that combines food availability, explicit prescription, and education can be effective in improving the body composition of high-school athletes. However, it is worth noting that this study had a small sample size, so the conclusions cannot be generalized to a larger population. Further research is needed to assess the scalability of this approach.
This preliminary study demonstrated the feasibility of this type of nutritional intervention and laid the groundwork for a larger, more extensive study to be conducted in the future.
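The reported statistics can be reproduced on any paired sample. A sketch with illustrative pre/post body-fat values (not the study's data), computing the paired t statistic and Cohen's d for paired samples (mean difference divided by the SD of the differences):

```python
import math
from statistics import mean, stdev

# Paired t-test and paired-samples Cohen's d on illustrative pre/post data.

def paired_t_and_d(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = stdev(diffs)                      # sample SD of the differences
    t = mean(diffs) / (sd / math.sqrt(n))  # paired t statistic, df = n - 1
    d = mean(diffs) / sd                   # Cohen's d for paired samples
    return t, d

pre = [22.0, 25.5, 30.1, 19.8, 27.3]   # body fat %, before (illustrative)
post = [21.1, 24.6, 28.9, 19.5, 26.0]  # body fat %, after (illustrative)
t, d = paired_t_and_d(pre, post)
print(round(t, 3), round(d, 3))
```

The p-value then comes from the t distribution with n-1 degrees of freedom; negative t and d here simply reflect a reduction in body fat.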

Keywords: bioelectrical impedance, body composition, high-school athletes, sports nutrition, sports pedagogy

Procedia PDF Downloads 97
1968 Dynamic Communications Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. H. Benyamina

Abstract:

In this paper, we propose a heuristic for dynamic communications mapping that considers the placement of communications in order to optimize overall performance. The mapping technique uses a newly proposed algorithm to place the communications between tasks. The proposed placement of communications leads to better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed mapping approach provides significant performance improvements compared to approaches using static routing.
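The quality of a communications placement is typically scored by volume-weighted hop count. A brute-force sketch of that cost model on a tiny mesh (task names and traffic volumes are hypothetical; the authors' heuristic itself is not reproduced here):

```python
import itertools

# Score a task-to-tile mapping on a 2x2 NoC mesh by total communication
# volume times Manhattan hop distance, then find the cheapest placement
# by exhaustive search (feasible only at toy scale).

def comm_cost(mapping, traffic):
    """mapping: task -> (x, y) tile; traffic: (src, dst) -> volume."""
    return sum(v * (abs(mapping[s][0] - mapping[d][0]) +
                    abs(mapping[s][1] - mapping[d][1]))
               for (s, d), v in traffic.items())

traffic = {("t0", "t1"): 10, ("t1", "t2"): 3}
tiles = [(0, 0), (0, 1), (1, 0), (1, 1)]
best = min((dict(zip(["t0", "t1", "t2"], p))
            for p in itertools.permutations(tiles, 3)),
           key=lambda m: comm_cost(m, traffic))
print(comm_cost(best, traffic))  # heavy pair t0-t1 ends up on adjacent tiles
```

A dynamic mapping heuristic replaces the exhaustive search with an incremental placement rule evaluated against this same kind of cost.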

Keywords: Multi-Processor Systems-on-Chip (MPSoCs), Network-on-Chip (NoC), heterogeneous architectures, dynamic mapping heuristics

Procedia PDF Downloads 540
1967 Developing and Enacting a Model for Institutional Implementation of the Humanizing Pedagogy: Case Study of Nelson Mandela University

Authors: Mukhtar Raban

Abstract:

As part of Nelson Mandela University’s journey of repositioning its learning and teaching agenda, the university adopted and foregrounded a humanizing pedagogy, aligning with institutional goals of critically transforming the academic project. The university established the Humanizing Pedagogy Praxis and Research Niche (HPPRN) as a centralized hub for coordinating institutional work exploring and advancing humanizing pedagogies and tasked the unit with developing and enacting a model for humanizing pedagogy exploration. This investigation endeavored to report on the development and enactment of a model that sought to institutionalize a humanizing pedagogy at a South African university. Having followed a qualitative approach, the investigation presents the case study of Nelson Mandela University’s HPPRN and the model it subsequently established and enacted for the advancement towards a more common institutional understanding, interpretation and application of the humanizing pedagogy. The study adopted an interpretive lens for analysis, complementing the qualitative approach of the investigation. The primary challenge that confronted the HPPRN was the development of a ‘living model’ that had to complement existing institutional initiatives while accommodating a renewed spirit of critical reflection, innovation and research of continued and new humanizing pedagogical exploration and applications. The study found that the explicit consideration of tenets of humanizing and critical pedagogies in underpinning and framing the HPPRN Model contributed to the sense of ‘lived’ humanizing pedagogy experiences during enactment. The multi-leveled inclusion of critical reflection in the development and enactment stages was found to further the processes of praxis employed at the university, which is integral to the advancement of humanizing and critical pedagogies.
The development and implementation of a model that seeks to institutionalize the humanizing pedagogy at a university rely not only on sound theoretical conceptualization but also on the ‘richness of becoming more human’ explicitly expressed and encountered in praxes and application.

Keywords: humanizing pedagogy, critical pedagogy, institutional implementation, praxis

Procedia PDF Downloads 169
1966 Bee Colony Optimization Applied to the Bin Packing Problem

Authors: Kenza Aida Amara, Bachir Djebbar

Abstract:

We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the valorization of the lost areas in the bins and a reduction of the problem size. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
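The first-fit strategy is easiest to see in its one-dimensional form (the paper treats the harder two-dimensional rectangle case):

```python
# First-fit for 1D bin packing: each item goes into the first bin with
# enough remaining capacity, opening a new bin if none fits.

def first_fit(items, capacity):
    bins = []  # remaining capacity of each open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] -= size
                break
        else:                          # no open bin fits: open a new one
            bins.append(capacity - size)
    return len(bins)

print(first_fit([4, 8, 1, 4, 2, 1], capacity=10))  # -> 2
```

In the 2D variant the "does it fit" check becomes a geometric placement test inside the bin, but the greedy first-fit scan is the same.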

Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment

Procedia PDF Downloads 640
1965 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine-learning-based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image data. The training images may come from different sources and be acquired from different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect.
The proposed method can be used to augment training and testing image data sets so that the trained machine learning model can be used to process CXRs from various sources, and it can be potentially used broadly in any medical imaging applications.
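The histogram-matching step can be sketched as mapping the source image's intensity CDF onto the reference's (tiny synthetic arrays stand in for the digital and digitized CXRs):

```python
import numpy as np

# Histogram matching: each source intensity is replaced by the reference
# intensity with the same cumulative distribution value.

def match_histograms(source, reference):
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # CDF -> reference intensity
    return matched[s_idx].reshape(source.shape)

src = np.array([[0, 64], [128, 255]], dtype=float)
ref = np.array([[10, 10], [200, 200]], dtype=float)
print(match_histograms(src, ref))  # -> [[10. 10.] [105. 200.]]
```

After this remapping, digital and digitized radiographs share a common intensity distribution, which is what lets them be pooled into one training set.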

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 230
1964 Toward an Informed Capacity Development Program in Inclusive and Sustainable Agricultural and Rural Development

Authors: Maria Ana T. Quimbo

Abstract:

As the Southeast Asian Regional Center for Graduate Study and Research in Agriculture (SEARCA) approaches its 50th founding anniversary, it continues to pursue its mission of strengthening the capacities of Southeast Asian leaders and institutions under its reformulated mission of Inclusive and Sustainable Agricultural and Rural Development (ISARD). Guided by this mission, this study analyzed the desired and priority capacity development needs of institution heads and key personnel toward addressing the constraints, problems, and issues related to agricultural and rural development and achieving their institutional goals. Adopting an exploratory, descriptive research design, the study examined the competency needs at the institutional and personnel levels. A total of 35 institution heads from seven countries and 40 key personnel from eight countries served as research participants. The results showed a variety of competencies in the areas of leadership and management, agriculture, climate change, research, monitoring and evaluation, planning, and extension or community service. While a mismatch was found in a number of desired and priority competency areas as perceived by the respondents, there were also interesting concordant answers in both technical and non-technical areas. Interestingly, the competency needs, both desired and prioritized, were a combination of “hard” technical skills and “soft” interpersonal skills. Policy recommendations were put forward on the need to continue building capacities in core competencies along ISARD; balance “hard” and “soft” skills through the use of appropriate training strategies and their explicit statement in training objectives; strengthen awareness of “soft” skills through their integration in workplace culture; build capacity in action research; continue partnerships; encourage mentoring; and build capacity in desired and priority competency areas.

Keywords: capacity development, competency needs assessment, sustainability and development, ISARD

Procedia PDF Downloads 381
1963 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application

Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior

Abstract:

Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, and convincing. These conditions allow bringing interaction between humans closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, we propose the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, to classify emotions from speech signals and verify the quality of recognition. This is expected to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the home. Tests were performed using only the Mel-Frequency Cepstral Coefficients (MFCCs), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. To carry out the training, validation, and testing of the neural networks, the eNTERFACE’05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos that were converted into audio for use with the neural networks. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%.
The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for the classification, using the classifier with the deep neural network, and in only one case, it is possible to observe a greater accuracy by the recurrent neural network, which occurs in the use of various features and setting 73 for batch size and 100 training epochs.
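The classification step of a dense (DNN-style) model over MFCC feature vectors can be sketched as follows; the layer sizes and random weights are illustrative, not the paper's trained network:

```python
import numpy as np

# Forward pass of a minimal dense classifier over a 13-coefficient MFCC
# vector: one ReLU hidden layer, then softmax over six emotion classes.

rng = np.random.default_rng(0)
n_mfcc, n_hidden, n_emotions = 13, 32, 6

W1 = rng.standard_normal((n_mfcc, n_hidden)) * 0.1   # untrained weights,
W2 = rng.standard_normal((n_hidden, n_emotions)) * 0.1  # for illustration only

def predict(mfcc_vec):
    h = np.maximum(0.0, mfcc_vec @ W1)     # ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

probs = predict(rng.standard_normal(n_mfcc))
print(probs.sum(), probs.argmax())         # probabilities sum to 1
```

An LSTM variant would instead consume the MFCC frames as a sequence; the softmax output layer over emotion classes is the same in both models.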

Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks

Procedia PDF Downloads 175
1962 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers

Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho

Abstract:

In the present paper, the design of plate heat exchangers is formulated as an optimization problem considering two mathematical models. The number of plates is the objective function to be minimized, with some configuration parameters considered implicitly. Screening is the optimization method used to solve the problem. Thermal and hydraulic constraints are verified, infeasible solutions are discarded, and the method searches for convergence to the optimum, if it exists. A case study is presented to test the applicability of the developed algorithm. Results are consistent with the literature.
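The screening method amounts to enumerating candidate plate counts in increasing order and returning the first feasible one. A sketch with placeholder constraint functions (not the paper's thermal and hydraulic models):

```python
# Screening: try plate counts in increasing order; the first count that
# passes both constraints is the minimum feasible design.

def screen(min_plates, max_plates, thermal_ok, hydraulic_ok):
    for n in range(min_plates, max_plates + 1):
        if thermal_ok(n) and hydraulic_ok(n):
            return n            # first feasible count is the optimum
    return None                 # no feasible design in the range

# Placeholder constraints: enough heat-transfer area (n >= 12) and a low
# enough pressure drop (n >= 9, since more plates lower channel velocity).
best = screen(4, 40, thermal_ok=lambda n: n >= 12, hydraulic_ok=lambda n: n >= 9)
print(best)  # -> 12
```

Because the objective (plate count) is the enumeration variable itself, the first feasible candidate is guaranteed optimal, so no further search is needed.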

Keywords: plate heat exchanger, optimization, modeling, simulation

Procedia PDF Downloads 520
1961 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of primary interest are time-to-event, so-called survival, data. The importance of robust models in this context is to compare the effect of randomly controlled experimental groups with a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model was proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for estimating causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we proposed modified estimating equations. After intuitive estimation procedures, the consistency and asymptotic properties of the estimators were derived, and the characteristics of the estimators in finite-sample performance were illustrated via simulation studies and the Stanford heart transplant real data example. To sum up, the bias of covariates was adjusted by estimating the density function of the truncation variable, which was also incorporated in the model as a covariate in order to relax the independence assumption of failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm was described for the iterative estimation of unknown parameters and the unspecified transformation function.
In addition, the causal effect was derived by the ratio of the cumulative hazard function of active and passive experiments after adjusting for bias raised in the model due to the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 129
1960 Drug Design Modelling and Molecular Virtual Simulation of an Optimized BSA-Based Nanoparticle Formulation Loaded with Di-Berberine Sulfate Acid Salt

Authors: Eman M. Sarhan, Doaa A. Ghareeb, Gabriella Ortore, Amr A. Amara, Mohamed M. El-Sayed

Abstract:

Drug salting and nanoparticle-based drug delivery formulations are considered an effective means of rendering hydrophobic drugs nano-scale dispersible in aqueous media, thus circumventing the pitfalls of their poor solubility as well as enhancing their membrane permeability. The current study aims to increase the bioavailability of quaternary ammonium berberine through acid salting and a biodegradable bovine serum albumin (BSA)-based nanoparticulate drug formulation. Berberine hydroxide (BBR-OH), chemically synthesized by alkalization of the commercially available berberine hydrochloride (BBR-HCl), was then acidified to obtain di-berberine sulfate (BBR)₂SO₄. The purified crystals were spectrally characterized. The desolvation technique was optimized for the preparation of size-controlled BSA-BBR-HCl, BSA-BBR-OH, and BSA-(BBR)₂SO₄ nanoparticles. Particle size, zeta potential, drug release, encapsulation efficiency, Fourier transform infrared spectroscopy (FTIR), tandem MS-MS spectroscopy, energy-dispersive X-ray spectroscopy (EDX), scanning and transmission electron microscopic examination (SEM, TEM), in vitro bioactivity, and in silico drug-polymer interactions were determined. The BSA (PDB ID: 4OR0) protonation state at different pH values was predicted using Amber12 molecular dynamics simulation. Blind docking was then performed using the Lamarckian genetic algorithm (LGA) in the AutoDock4.2 software. Results proved the purity and the size-controlled synthesis of berberine-BSA nanoparticles. The possible binding poses and the hydrophobic and hydrophilic interactions of berberine with BSA at different pH values were predicted. The antioxidant, anti-hemolytic, and cell-differentiation abilities of the tested drugs and their nano-formulations were evaluated. Thus, drug salting and potentially effective albumin-berberine nanoparticle formulations can be successfully developed using a well-optimized desolvation technique, exhibiting better in vitro cellular bioavailability.

Keywords: berberine, BSA, BBR-OH, BBR-HCl, BSA-BBR-HCl, BSA-BBR-OH, (BBR)₂SO₄, BSA-(BBR)₂SO₄, FTIR, AutoDock4.2 software, Lamarckian genetic algorithm, SEM, TEM, EDX

Procedia PDF Downloads 176
1959 DC/DC Boost Converter Applied to Photovoltaic Pumping System Application

Authors: S. Abdourraziq, M. A. Abdourraziq

Abstract:

One of the best-known and most important applications of solar energy systems is water pumping, often used for irrigation or to supply water in the countryside or to private farms. However, cost and efficiency remain concerns, especially with the continual variation of solar radiation and temperature throughout the day. Improving the efficiency of the system components is therefore one of the solutions for reducing the cost. In this paper, we present a detailed definition of each element of a PV pumping system, and we review the different MPPT algorithms used in the literature. Our system consists of a PV panel, a boost converter, a motor-pump set, and a storage tank.
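Among the MPPT algorithms surveyed in the literature, perturb-and-observe is the classic baseline. A minimal sketch on a toy P-V curve (the curve, step size, and starting point are illustrative):

```python
# Perturb-and-observe MPPT: perturb the operating voltage by a fixed step,
# keep the direction if output power rose, reverse it if power fell. The
# operating point ends up oscillating around the maximum power point.

def perturb_and_observe(pv_power, v0, step, iterations):
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy P-V curve with its maximum power point at 17 V.
pv = lambda v: -(v - 17.0) ** 2 + 60.0
print(perturb_and_observe(pv, v0=12.0, step=0.5, iterations=40))
```

The steady-state oscillation of +/- one step around the maximum is the known drawback of P&O; adaptive-step variants shrink the step as the maximum is approached.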

Keywords: PV cell, converter, MPPT, MPP, PV pumping system

Procedia PDF Downloads 163
1958 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters for maximizing UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest speed without regard to the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while keeping energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis.
For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace, and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 224
1957 Electrodermal Activity Measurement Using Constant Current AC Source

Authors: Cristian Chacha, David Asiain, Jesús Ponce de León, José Ramón Beltrán

Abstract:

This work explores and characterizes the behavior of the AFE AD5941 in impedance measurement using an embedded algorithm with a constant-current AC source. The main aim of this research is to improve the accuracy of impedance measurement for application in EDA-focused wearable devices. Through comprehensive study and characterization, it has been observed that a measurement sequence with a constant current source produces results with greater dispersion but higher accuracy, leading to a more accurate impedance measurement system.
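Impedance extraction under constant-current AC excitation reduces to dividing the measured voltage phasor by the known current phasor. A sketch using a single-bin DFT on synthetic samples (illustrative numbers, not AD5941 register data):

```python
import cmath
import math

# With a constant-current AC source, Z = V/I: correlate the sampled voltage
# with the excitation frequency (single-bin DFT, peak-amplitude convention)
# and divide by the known current phasor.

def impedance(v_samples, i_amp, f_hz, fs_hz):
    n = len(v_samples)
    v_ph = sum(v * cmath.exp(-2j * math.pi * f_hz * k / fs_hz)
               for k, v in enumerate(v_samples)) * 2 / n
    i_ph = complex(i_amp, 0.0)   # the current source sets the phase reference
    return v_ph / i_ph

# Synthetic test: a 1 mA source driving a 50-ohm resistive load, sampled
# at 16 kHz over exactly 10 periods of the 1 kHz excitation.
fs, f, i_amp, z_true = 16000.0, 1000.0, 1e-3, 50.0
v = [i_amp * z_true * math.cos(2 * math.pi * f * k / fs) for k in range(160)]
z = impedance(v, i_amp, f, fs)
print(round(abs(z), 2))  # -> 50.0
```

Sampling an integer number of excitation periods keeps the single-bin DFT leakage-free; a reactive load would show up as a nonzero phase of `z`.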

Keywords: EDA, constant current AC source, wearable, precision, accuracy, impedance

Procedia PDF Downloads 112
1956 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flow and the purpose of reporting the data are different and dependent on business needs. Different parameters are reported and transferred regularly during freight delivery. This business practices form the dataset constructed for each time point and contain all required information for freight moving decisions. As a significant amount of these data is used for various purposes, an integrating methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outliers analysis, particularly for data cleaning and the identification of statistical significance of data reporting event cases. The Grubbs test is often used as it measures one external value at a time exceeding the boundaries of standard normal distribution. In the study area, the test was not widely applied by authors, except when the Grubbs test for outlier detection was used to identify outsiders in fuel consumption data. In the study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors would like to select the forms of construction of the genetic algorithms, which have more possibilities to extract the best solution. For freight delivery management, the schemas of genetic algorithms' structure are used as a more effective technique. Due to that, the adaptable genetic algorithm is applied for the description of choosing process of the effective transportation corridor. In this study, the multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. 
The authors suggest a methodology for multi-objective analysis that evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for the freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
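The Grubbs screening step at the 99% confidence level described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the function name and sample data are invented for the example, and scipy supplies the t critical value.

```python
import math
from scipy import stats

def grubbs_outlier(data, alpha=0.01):
    """Grubbs test for a single outlier at significance level alpha.
    Returns the index of the detected outlier, or None if no value
    exceeds the critical threshold."""
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    # Test statistic: largest absolute deviation in units of s
    idx = max(range(n), key=lambda i: abs(data[i] - mean))
    g = abs(data[idx] - mean) / s
    # Critical value from the t distribution (two-sided form)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / math.sqrt(n) * math.sqrt(t * t / (n - 2 + t * t))
    return idx if g > g_crit else None
```

With alpha=0.01 this corresponds to the 99% confidence level used in the study; flagged records would then be cleaned before the multi-objective analysis.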

Keywords: multi-objective analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 182
1955 Optimization Process for Ride Quality of a Nonlinear Suspension Model Based on Newton-Euler’ Augmented Formulation

Authors: Mohamed Belhorma, Aboubakar S. Bouchikhi, Belkacem Bounab

Abstract:

This paper addresses the modeling of a double A-arm suspension; a three-dimensional nonlinear model has been developed using the multibody systems formalism. A dynamic study of the responses of the different components was carried out, particularly for the wheel assembly. To validate these results, the system was constructed and simulated in RecurDyn, a professional multibody dynamics simulation software package. The model has been used as the objective function in an optimization algorithm for ride quality improvement.

Keywords: double A-Arm suspension, multibody systems, ride quality optimization, dynamic simulation

Procedia PDF Downloads 140
1954 A Combined Meta-Heuristic with Hyper-Heuristic Approach to Single Machine Production Scheduling Problem

Authors: C. E. Nugraheni, L. Abednego

Abstract:

This paper is concerned with the minimization of mean tardiness and flow time in a real single machine production scheduling problem. Two variants of a genetic algorithm as meta-heuristics, combined with a hyper-heuristic approach, are proposed to solve this problem. These methods are used to solve instances generated from real-world data from a company. Encouraging results are reported.
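As a concrete illustration of the two objectives being minimized, the sketch below evaluates a candidate job sequence on a single machine. This is a hedged toy example (the function name and data are invented), not the company instances or the meta-/hyper-heuristic itself.

```python
def evaluate_sequence(jobs):
    """jobs: list of (processing_time, due_date) pairs in processing order.
    Returns (mean_flow_time, mean_tardiness) for a single machine,
    assuming all jobs are released at time zero."""
    t = 0.0
    flow, tardy = 0.0, 0.0
    for p, due in jobs:
        t += p                     # completion time of this job
        flow += t                  # flow time = completion time here
        tardy += max(0.0, t - due) # tardiness = lateness clipped at zero
    n = len(jobs)
    return flow / n, tardy / n
```

A genetic algorithm would then search over permutations of the jobs, using some combination of these two values as the fitness to minimize.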

Keywords: hyper-heuristics, evolutionary algorithms, production scheduling, meta-heuristic

Procedia PDF Downloads 385
1953 Battling the Final Stages of Genocide in Bosnia and Herzegovina: Denial and Triumphalism

Authors: Ehlimana Memisevic

Abstract:

Genocide denial is considered the final stage of genocide, which, in the words of Gregory H. Stanton, represents "one of the most certain indicators of future genocides”. Genocide denial in Bosnia and Herzegovina started in 1992, almost simultaneously with the genocide itself. Over the course of three decades, different forms of genocide and war crimes denial have been developed by state officials, politicians, journalists, and civilians, both in Republika Srpska – the Serb-dominated entity within Bosnia and Herzegovina – and Serbia. Moreover, genocide and war crimes are not only denied but also glorified and celebrated, which was described as "triumphalism" by the Australian-Bosnian scholar Hariz Halilovich, who suggested it be added as the 11th phase of Gregory Stanton's "10 stages of genocide." Since 2007, there have been a number of attempts to criminalize genocide denial at the state level in Bosnia and Herzegovina. However, all of them were unsuccessful due to the opposition of representatives of Republika Srpska. On July 23, 2021, the High Representative in Bosnia and Herzegovina, Valentin Inzko, used his power as the final authority in overseeing the civil implementation of the Dayton Peace Accords to impose amendments to Bosnia and Herzegovina's criminal code to ban the denial and glorification of genocide, crimes against humanity and war crimes. However, immediately after the OHR's decision was announced, Milorad Dodik, a Serb member of Bosnia's tripartite presidency, held a press conference, publicly denied the genocide, and announced that this law would never be accepted in Republika Srpska. Denial remains explicit and public and is promulgated through official channels in Bosnia and Herzegovina. 
This paper will analyze the forms of genocide and other war crimes denial and glorification in the period after the amendments to the Criminal Code of Bosnia and Herzegovina were introduced, which include incrimination of public condoning, denial, gross trivialization or justification of a crime of genocide, crimes against humanity or a war crime established by a final adjudication of the international and domestic courts. We aim to determine the effect of the imposed law and the impact of the denial committed by high-ranking public officials on the denial and celebration of genocide and war crimes committed by ordinary citizens.

Keywords: genocide, denial, triumphalism, incrimination

Procedia PDF Downloads 80
1952 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. 
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
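The claimed reduction in instruction count with wider datapaths can be illustrated with a back-of-envelope model. This is an assumption-laden sketch, not the paper's measurements: it merely counts how many register-width operations are needed to touch one wide cryptographic operand.

```python
import math

def word_ops(operand_bits, datapath_bits):
    """Number of register-width ALU/load operations needed to process
    one operand of the given size (a crude lower bound on instruction count)."""
    return math.ceil(operand_bits / datapath_bits)

# A 256-bit field element (as in ECC) on 32-, 64- and 128-bit datapaths
counts = {w: word_ops(256, w) for w in (32, 64, 128)}
```

Under this simple model, the 128-bit datapath needs a quarter of the per-operand operations of the 32-bit one, which is the direction of the instruction-count and CPI advantage reported above.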

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 74
1951 Shear Surface and Localized Waves in Functionally Graded Piezoactive Electro-Magneto-Elastic Media

Authors: Karen B. Ghazaryan

Abstract:

Recently, the propagation of coupled electromagnetic and elastic waves in magneto-electro-elastic (MEE) structures has attracted much attention due to the wide range of applications of these materials in smart structures. MEE materials are a class of new artificial composites that consist of simultaneous piezoelectric and piezomagnetic phases. Magneto-electro-elastic composites are built up by combining piezoelectric and piezomagnetic phases to obtain a smart composite that presents not only electromechanical and magneto-mechanical coupling but also a strong magnetoelectric coupling, which makes such materials highly valuable in technological applications. In the framework of the quasi-static approach, shear surface and localized waves are considered in a magneto-electro-elastic piezo-active structure consisting of functionally graded crystals of the 6mm hexagonal symmetry group. Assuming that in a functionally graded material the elastic and electromagnetic properties vary in the same proportion in the direction perpendicular to the MEE poling direction, special classes of inhomogeneity functions were found that admit exact solutions for the coupled electromagnetic and elastic wave fields. Based on these exact solutions defining the coupled shear wave field in magneto-electro-elastic composites, several modal problems are considered: shear surface wave propagation along the surface of an MEE half-space, interfacial wave propagation in an MEE oppositely polarized bi-layer, and Love-type waves in a functionally graded MEE layer overlying a homogeneous elastic half-space. For the problems under consideration, the corresponding dispersion equations are deduced analytically in explicit form, and for the BaTiO₃–CoFe₂O₄ crystal, numerical results estimating the effects of inhomogeneity and the piezo effect are presented.
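For context (this formula is standard in the SH-wave literature for 6mm media and is not taken from the abstract itself), the quasi-static coupling enters through a magneto-electro-elastically stiffened shear modulus of the form:

```latex
\tilde{c}_{44} = c_{44} +
\frac{\mu_{11}\, e_{15}^{2} - 2\, d_{11}\, e_{15}\, q_{15} + \varepsilon_{11}\, q_{15}^{2}}
     {\varepsilon_{11}\, \mu_{11} - d_{11}^{2}},
\qquad
c_{\mathrm{sh}} = \sqrt{\tilde{c}_{44}/\rho},
```

where e₁₅ and q₁₅ are the piezoelectric and piezomagnetic constants, ε₁₁, μ₁₁ and d₁₁ the dielectric, magnetic and magnetoelectric permittivities, and ρ the density; in the functionally graded case all of these coefficients share the same coordinate dependence.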

Keywords: surface shear waves, magneto-electro-elastic composites, piezoactive crystals, functionally graded elastic materials

Procedia PDF Downloads 217
1950 Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images

Authors: Emhimed Saffor

Abstract:

In this paper, the problem of edge detection in digital images is considered. Three edge detection methods based on a mathematical morphology algorithm were applied to two sets of CT images (brain and chest): a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third, all implemented in the MATLAB programming environment. The results of the above-mentioned methods were evaluated subjectively. The results show that these methods are efficient and suitable for medical images and can also be used for other applications.
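The morphology-based edge detection described above amounts to a morphological gradient with structuring elements of growing size. A rough Python equivalent of the MATLAB workflow (the function name is illustrative, and scipy is assumed) is:

```python
import numpy as np
from scipy import ndimage

def morph_edges(image, size):
    """Morphological gradient: grey-level dilation minus erosion with a
    size x size structuring element (size = 3, 5 or 7 as in the study).
    Edges appear where the local maximum and minimum differ."""
    dil = ndimage.grey_dilation(image, size=(size, size))
    ero = ndimage.grey_erosion(image, size=(size, size))
    return dil - ero
```

Applying the same function with size 3, 5 and 7 to a CT slice reproduces the three methods compared in the paper; larger elements give thicker edge responses.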

Keywords: CT images, Matlab, medical images, edge detection

Procedia PDF Downloads 339
1949 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration

Authors: Mohammad Reza Esmaili

Abstract:

One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. To this end, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is carried out successfully and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, in a weak network with extreme load changes, restoration proceeds slowly. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with greater speed and accuracy, so that the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator that changes the transmission line power and properly adjusts the phase angle, is the option proposed to realize the aim of this research. With the optimal placement of a UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. The presented paper therefore provides an optimal structure to coordinate three problems, namely improving the division of subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The proposed objective functions in this paper include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system. 
Since simultaneous optimization of the proposed objective functions may create contradictions, the proposed optimization problem is structured as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The innovative technique proposed to implement the optimization process of this problem is the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
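The Pareto machinery used to reconcile the competing objectives can be sketched generically. In the sketch below all objectives are written as minimizations (a maximized quantity would be negated); the function names are illustrative and this is not the authors' WCA implementation.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A metaheuristic such as the water cycle algorithm would generate candidate UPFC placements and subsystem divisions, evaluate the objective vector for each, and retain the non-dominated set as the Pareto front.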

Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto

Procedia PDF Downloads 72
1948 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of ANAC-id at the National Civil Aviation Agency (ANAC) in Brazil. ANAC-id is an artificial intelligence algorithm developed for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, making it possible to identify potential inconsistencies. It combines the YOLO architecture with three Python libraries - face recognition, face comparison, and deep face - providing robust analysis with a high level of accuracy.

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 162
1947 Relevant LMA Features for Human Motion Recognition

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.

Keywords: discriminative LMA features, features reduction, human motion recognition, random forest

Procedia PDF Downloads 201
1946 Advancing Inclusive Curriculum Development for Special Needs Education in Africa

Authors: Onosedeba Mary Ayayia

Abstract:

Inclusive education has emerged as a critical global imperative, aiming to provide equitable educational opportunities for all, regardless of their abilities or disabilities. In Africa, the pursuit of inclusive education faces significant challenges, particularly concerning the development and implementation of inclusive curricula tailored to the diverse needs of students with disabilities. This study delves into the heart of this issue, seeking to address the pressing problem of exclusion and marginalization of students with disabilities in mainstream educational systems across the continent. The problem is complex, entailing issues of limited access to tailored curricula, shortages of qualified teachers in special needs education, stigmatization, limited research and data, policy gaps, inadequate resources, and limited community awareness. These challenges perpetuate a system where students with disabilities are systematically excluded from quality education, limiting their future opportunities and societal contributions. This research proposes a comprehensive examination of the current state of inclusive curriculum development and implementation in Africa. Through an innovative and explicit exploration of the problem, the study aims to identify effective strategies, guidelines, and best practices that can inform the development of inclusive curricula. These curricula will be designed to address the diverse learning needs of students with disabilities, promote teacher capacity building, combat stigmatization, generate essential data, enhance policy coherence, allocate adequate resources, and raise community awareness. The goal of this research is to contribute to the advancement of inclusive education in Africa by fostering an educational environment where every student, regardless of ability or disability, has equitable access to quality education. 
Through this endeavor, the study aligns with the broader global pursuit of social inclusion and educational equity, emphasizing the importance of inclusive curricula as a foundational step towards a more inclusive and just society.

Keywords: inclusive education, special education, curriculum development, Africa

Procedia PDF Downloads 66
1945 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the graphics processing unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. 
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
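The equidistant-sampling idea, and the kind of gap it must guard against, can be shown in a toy 2D analogue. This is a hedged sketch with invented names, not the authors' GLSL compute shader: it samples a segment at equidistant steps and records each grid cell exactly once.

```python
import math

def voxelize_segment(p0, p1, step=0.5):
    """Sample a 2D segment at equidistant steps and collect the unique
    grid cells visited, in order (a toy analogue of the scan-line idea)."""
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, math.ceil(length / step))
    cells, seen = [], set()
    for i in range(n + 1):
        t = i / n
        cell = (math.floor(x0 + t * (x1 - x0)),
                math.floor(y0 + t * (y1 - y0)))
        if cell not in seen:          # each cell recorded exactly once
            seen.add(cell)
            cells.append(cell)
    return cells
```

Note that for a diagonal segment the sampled cells can step directly from (0, 0) to (1, 1), skipping the shared-edge neighbors; this is precisely the class of gap that the paper's Gap Detection technique identifies and fills.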

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 75
1944 A Time-Reducible Approach to Compute Determinant |I-X|

Authors: Wang Xingbo

Abstract:

Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to computing the determinant |I-X|. The approach is derived from Newton’s identity, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail for the approach. By comparison with classical approaches, the new approach is shown to be superior, and its computational time naturally decreases as the efficiency of computing the eigenvalues of the square matrix improves.
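The underlying idea can be sketched numerically: with power sums p_i = tr(X^i), Newton's identities yield the elementary symmetric polynomials e_k of the eigenvalues, and det(I-X) is their alternating sum. The code below is our own reconstruction of this route (names are illustrative, and it is not the author's implementation).

```python
import numpy as np

def det_I_minus_X(X):
    """det(I - X) from traces of powers of X via Newton's identities.
    det(I - X) = prod(1 - lambda_i) = sum_{k=0}^{n} (-1)^k e_k,
    where e_k are the elementary symmetric polynomials of the eigenvalues."""
    n = X.shape[0]
    # Power sums p_i = tr(X^i), i = 1..n
    p = [0.0]
    P = np.eye(n)
    for _ in range(n):
        P = P @ X
        p.append(np.trace(P))
    # Newton's identities: k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i
    e = [1.0]
    for k in range(1, n + 1):
        e.append(sum((-1) ** (i - 1) * e[k - i] * p[i]
                     for i in range(1, k + 1)) / k)
    return sum((-1) ** k * e[k] for k in range(n + 1))
```

Only traces of matrix powers are required, so the cost is dominated by the matrix multiplications, consistent with the claim that the approach is no more expensive than an eigenvalue computation.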

Keywords: algorithm, determinant, computation, eigenvalue, time complexity

Procedia PDF Downloads 420