Search results for: deductive reasoning algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3951

1911 A Survey on Important Factors of the Ethereum Network Performance

Authors: Ali Mohammad Mobaser Azad, Alireza Akhlaghinia

Abstract:

Blockchain is changing our world and launching a new generation of decentralized networks. Blockchain-based networks like Ethereum have been created, and they will facilitate these processes using tools like smart contracts. Ethereum has fundamental structures, each of which affects the activity of the nodes. Our purpose in this paper is to review similar research and examine the various components that determine the performance of the Ethereum network. To do this, we used data published by the Ethereum Foundation at different points in time to examine the changes that determine the status of network performance. This will help other researchers better understand Ethereum in different situations.

Keywords: blockchain, ethereum, smart contract, decentralization, consensus algorithm

Procedia PDF Downloads 216
1910 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and the learned model can then be deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity and distance from the side pavement. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
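
As a rough illustration of the simplest of the three methods, the sketch below shows a tabular Q-learning update on discretized sensor readings. The environment interface and the state/action binning are assumptions for illustration; they are not the TORCS client API used in the paper.

```python
import numpy as np
from collections import defaultdict

# Minimal sketch of a tabular Q-learning baseline on a discretized state
# built from TORCS-style sensor readings (velocity, distance to pavement).
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = range(5)  # e.g. {hard-left, left, straight, right, hard-right}

Q = defaultdict(lambda: np.zeros(len(ACTIONS)))

def discretize(velocity, dist_to_side):
    # Bin continuous sensor values so they can index the Q-table.
    return (int(velocity // 10), int(dist_to_side // 0.5))

def q_update(s, a, reward, s_next):
    # Standard one-step Q-learning target: r + gamma * max_a' Q(s', a').
    td_target = reward + GAMMA * Q[s_next].max()
    Q[s][a] += ALPHA * (td_target - Q[s][a])

def select_action(s):
    # Epsilon-greedy exploration.
    if np.random.rand() < EPSILON:
        return int(np.random.choice(len(ACTIONS)))
    return int(np.argmax(Q[s]))
```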

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 138
1909 Rainwater Harvesting and Management of Ground Water (Case Study: Weather Modification Project in Iran)

Authors: Samaneh Poormohammadi, Farid Golkar, Vahideh Khatibi Sarabi

Abstract:

Climate change and consecutive droughts have increased the importance of rainwater harvesting methods. One method of rainwater harvesting, in other words the management of atmospheric water resources, is the use of weather modification technologies. Weather modification (also known as weather control) is the act of intentionally manipulating or altering the weather. The most common form of weather modification is cloud seeding, which increases rain or snow, usually for the purpose of increasing the local water supply. Cloud seeding operations have been carried out in central Iran since 1999 with the aim of harvesting rainwater and reducing the effects of drought. In this research, we analyze the results of cloud seeding operations in the Simindasht plain in northern Iran. Rainwater harvesting with the help of cloud seeding technology has been evaluated through its effects on surface water and underground water. For this purpose, two different methods have been used to estimate runoff. The first is the US Soil Conservation Service (SCS) curve number method. Another method, known as the rational method, has also been used. In order to determine the infiltration rate of underground water, the water balance reports of the country's comprehensive water plan have been used. In this regard, the study areas located in the target area of each province were extracted by drawing maps of the infiltration coefficients of each area in GIS software; the infiltration coefficients themselves were taken from the water balance reports of the country's comprehensive water plan. Then, based on the area of each study area, the weighted average of the infiltration coefficients of the study areas located in the target area of each province is taken as the infiltration coefficient of that province. Results show that the amount of water extracted from rain with the help of cloud seeding projects in Simindasht is as follows: an increase in runoff of 63.9 million cubic meters (with the SCS equation) or 51.2 million cubic meters (with the rational equation), and an increase in groundwater resources of 40.5 million cubic meters.
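
The SCS curve number relation referred to above is standard and can be written directly in code; the example below uses the usual SI-units form, with the curve number and storm depth chosen purely for illustration.

```python
def scs_runoff_mm(precip_mm: float, curve_number: float) -> float:
    """SCS Curve Number runoff (SI units, depths in mm).

    Standard form: S = 25400/CN - 254, Ia = 0.2*S,
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
    """
    s = 25400.0 / curve_number - 254.0   # potential maximum retention
    ia = 0.2 * s                         # initial abstraction
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Example: a 60 mm storm on a catchment with CN = 75 (illustrative values).
print(scs_runoff_mm(60.0, 75.0))
```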

Keywords: rainwater harvesting, ground water, atmospheric water resources, weather modification, cloud seeding

Procedia PDF Downloads 102
1908 Classic Training of a Neural Observer for Estimation Purposes

Authors: R. Loukil, M. Chtourou, T. Damak

Abstract:

This paper investigates the training of a multilayer neural network using the classic approach. Then, for estimation purposes, we suggest the use of a specific neural observer and study its training algorithm, based on back-propagation, both in the case where the state is available and in the case of an unmeasurable state. A MATLAB simulation example is studied to highlight the usefulness of this kind of observer.
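
A minimal sketch of the back-propagation training loop for a one-hidden-layer network of the kind such an observer builds on is shown below; the layer sizes, learning rate, and tanh activation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Minimal sketch of back-propagation for a one-hidden-layer network,
# as used to fit a neural observer; dimensions and data are illustrative.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out, lr = 3, 10, 2, 0.05

W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))

def forward(x):
    h = np.tanh(W1 @ x)          # hidden activations
    return h, W2 @ h             # linear output layer

def train_step(x, target):
    global W1, W2
    h, y = forward(x)
    err = y - target                     # output error
    dW2 = np.outer(err, h)               # gradient, output layer
    dh = (W2.T @ err) * (1.0 - h ** 2)   # back-propagated error (tanh')
    dW1 = np.outer(dh, x)                # gradient, hidden layer
    W2 -= lr * dW2
    W1 -= lr * dW1
    return float(0.5 * err @ err)        # squared-error loss
```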

Keywords: training, estimation purposes, neural observer, back-propagation, unmeasurable state

Procedia PDF Downloads 564
1907 An Object-Based Image Resizing Approach

Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai

Abstract:

Common methods for resizing an image include scaling and cropping. However, these two approaches have quality problems for reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
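
A minimal sketch of the importance-map step is given below. The gradient map here is a plain Sobel magnitude and the saliency term a crude blur-difference proxy; the paper's enhanced saliency and improved gradient maps, and the weighting alpha, are more elaborate, so everything in this sketch is an illustrative assumption.

```python
import cv2
import numpy as np

def importance_map(image_bgr: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Gradient map: Sobel magnitude.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.hypot(gx, gy)

    # Crude saliency proxy: deviation from a heavily blurred version.
    sal = np.abs(gray - cv2.GaussianBlur(gray, (0, 0), sigmaX=8))

    def norm(m):  # rescale to [0, 1]
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    # Weighted combination of the two feature maps.
    return alpha * norm(sal) + (1.0 - alpha) * norm(grad)
```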

Keywords: energy map, visual saliency, gradient map, seam carving

Procedia PDF Downloads 473
1906 Adaptive CFAR Analysis for Non-Gaussian Distribution

Authors: Bouchemha Amel, Chachoui Takieddine, H. Maalem

Abstract:

Automatic detection of targets in a modern radar system is based primarily on the concept of the adaptive CFAR detector. To achieve effective detection, we must minimize the influence of disturbances due to clutter. The detection algorithm adapts the CFAR detection threshold, which is proportional to the average power of the clutter, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms, CA-CFAR and OS-CFAR, and we compare the thresholds of these detectors in a marine (non-Gaussian) environment with a Weibull distribution.
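
For reference, a minimal cell-averaging (CA) CFAR on a one-dimensional power profile looks as follows; the window sizes and false-alarm probability are illustrative. OS-CFAR differs only in replacing the training-cell mean with an order statistic (e.g., the k-th largest training cell).

```python
import numpy as np

def ca_cfar(power: np.ndarray, n_train: int = 16, n_guard: int = 2,
            pfa: float = 1e-3) -> np.ndarray:
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    # Threshold factor for CA-CFAR under exponentially distributed noise:
    # alpha = N * (Pfa^(-1/N) - 1), with N training cells in total.
    n_cells = 2 * n_train
    alpha = n_cells * (pfa ** (-1.0 / n_cells) - 1.0)
    half = n_train + n_guard
    for i in range(half, n - half):
        lead = power[i - half:i - n_guard]            # leading training cells
        lag = power[i + n_guard + 1:i + half + 1]     # lagging training cells
        noise = (lead.sum() + lag.sum()) / n_cells    # clutter power estimate
        detections[i] = power[i] > alpha * noise      # adaptive threshold test
    return detections
```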

Keywords: CFAR, threshold, clutter, distribution, Weibull, detection

Procedia PDF Downloads 578
1905 Analysis of the Inverse Kinematics for 5 DOF Robot Arm Using D-H Parameters

Authors: Apurva Patil, Maithilee Kulkarni, Ashay Aswale

Abstract:

This paper proposes an algorithm to develop the kinematic model of a 5 DOF robot arm. The formulation of the problem is based on finding the D-H parameters of the arm. A brute-force iterative method is employed to solve the system of nonlinear equations. The focus of the paper is on obtaining accurate solutions by reducing the root mean square error. The results obtained are used to grip objects. The trajectories followed by the end effector for the required workspace coordinates are plotted. The methodology used here can be applied to any other kinematic chain of up to six DOF.
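
The building block of such a model is the standard Denavit-Hartenberg homogeneous transform; chaining one matrix per joint gives the forward kinematics that the brute-force iterative search evaluates. A minimal sketch (parameter values left to the specific arm) follows.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    # Standard Denavit-Hartenberg homogeneous transform for one joint.
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    # dh_params: list of (d, a, alpha) per joint; the angles are the unknowns
    # the brute-force iterative search adjusts to minimize RMS pose error.
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # end-effector pose
```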

Keywords: 5 DOF robot arm, D-H parameters, inverse kinematics, iterative method, trajectories

Procedia PDF Downloads 197
1904 Nonlinear Observer Canonical Form for Genetic Regulation Process

Authors: Bououden Soraya

Abstract:

This paper studies the existence of a change of coordinates that transforms a class of nonlinear dynamical systems into the so-called nonlinear observer canonical form (NOCF). Moreover, an algorithm to construct such a change of coordinates is given. Based on this form, we can design an observer with linear error dynamics. This enables us to estimate the state of a nonlinear dynamical system. A concrete example (a biological model) is provided to illustrate the feasibility of the proposed results.
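
In the canonical coordinates, the observer copies the system dynamics and injects the output error, so the estimation error evolves linearly. A minimal discrete-time sketch is given below; the matrices, gain, and output nonlinearity are placeholders, not the paper's genetic-regulation model.

```python
import numpy as np

# In canonical coordinates z' = A z + phi(y), y = C z, the observer
# z_hat' = A z_hat + phi(y) + L (y - C z_hat) gives linear error dynamics
# e' = (A - L C) e, so the gain L just places the error eigenvalues.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])   # places eigenvalues of A - L C at -1, -2

def observer_step(z_hat, y, phi_y, dt=0.01):
    # Euler-discretized observer step; y is the measured output, shape (1,),
    # phi_y = phi(y) is the output-dependent nonlinearity, shape (2,).
    dz_hat = A @ z_hat + phi_y + (L @ (y - C @ z_hat))
    return z_hat + dt * dz_hat
```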

Keywords: nonlinear observer canonical form, observer design, gene regulation, gene expression

Procedia PDF Downloads 428
1903 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to synthesize grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to synthesize grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange multipliers. In addition, a machine learning method called Random Forest is applied to estimate gold grade in a mineral deposit, and the Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed versions perform considerably better than the traditional methods; the outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient methods. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and improve the expected net present value and financial profitability for the LTPSOP, and it controls geological risk more effectively than the traditional procedure.
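
A skeleton of the ALR outer loop described above might look as follows; the NPV, constraint-violation, and inner-search routines are placeholders, and the fixed penalty parameter is a simplifying assumption (the paper's HHO would play the role of the inner minimizer driving the multiplier update).

```python
import numpy as np

# Skeleton of augmented Lagrangian relaxation (ALR): a black-box minimizer
# (HHO in the paper; any metaheuristic here) solves the relaxed subproblem,
# then the multipliers are updated from the remaining constraint violations.
# npv(x), violations(x) (positive where violated) and minimize() are
# placeholders for the mine-scheduling model.
def alr_schedule(npv, violations, minimize, n_constraints,
                 n_outer=20, rho=10.0):
    lam = np.zeros(n_constraints)
    best_x = None
    for _ in range(n_outer):
        def penalized(x, lam=lam):               # freeze current multipliers
            v = np.maximum(0.0, violations(x))   # only violated constraints
            # Augmented Lagrangian: -NPV + lam.g + (rho/2)||g||^2
            return -npv(x) + lam @ v + 0.5 * rho * (v @ v)
        best_x = minimize(penalized)             # inner metaheuristic search
        lam = lam + rho * np.maximum(0.0, violations(best_x))
    return best_x
```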

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 97
1902 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging

Authors: Mohammad Esmaeilpour

Abstract:

One of the important open problems in the field of medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used to semi-automatically detect and quantify subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We mostly focus on the low-rank components of the matrices obtained by decomposing MR image pairs acquired over a period of time. In addition, a skilled neuroradiologist helps the algorithm distinguish between noise and small changes.
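
A minimal sketch of the low-rank idea is shown below: truncate the SVD of each registered image and compare the low-rank parts, so that high-rank noise is suppressed before differencing. The rank and threshold are assumptions that, as the abstract notes, a neuroradiologist would help tune.

```python
import numpy as np

def low_rank(img: np.ndarray, k: int) -> np.ndarray:
    # Rank-k SVD approximation of a 2-D image slice.
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]

def subtle_change_map(img_t0, img_t1, k=20, thresh=0.1):
    # The low-rank parts capture the stable anatomy; their difference
    # highlights slow structural change while suppressing high-rank noise.
    diff = np.abs(low_rank(img_t1, k) - low_rank(img_t0, k))
    return diff > thresh * diff.max()
```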

Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions

Procedia PDF Downloads 467
1901 Clinch Process Simulation Using Diffuse Elements

Authors: Benzegaou Ali, Brani Benabderrahmane

Abstract:

This work describes a numerical study of the TOX clinching process using diffuse elements. A computer code named SEMA (Static Explicit Method Analysis) was developed to simulate the clinch joining process. The FE code is based on an updated Lagrangian scheme, and the resolution method is based on an explicit static approach. The integration of the elasto-plastic behavior law is realized with the algorithm of Simo and Taylor. The tools are represented by plane facets.

Keywords: diffuse elements, numerical simulation, clinching, contact, large deformation

Procedia PDF Downloads 356
1900 Prescription of Lubricating Eye Drops in the Emergency Eye Department: A Quality Improvement Project

Authors: Noorulain Khalid, Unsaar Hayat, Muhammad Chaudhary, Christos Iosifidis, Felipe Dhawahir-Scala, Fiona Carley

Abstract:

Dry eye disease (DED) is a common condition seen in the emergency eye department (EED) at Manchester Royal Eye Hospital (MREH). However, there is variability in the prescription of lubricating eye drops among different healthcare providers. The aim of this study was to develop an up-to-date, standardized algorithm for the prescription of lubricating eye drops in the EED at MREH based on international and national guidelines. The study also aimed to assess the impact of implementing the guideline on the rate of inappropriate lubricant prescriptions. The impact was assessed primarily in terms of the appropriateness of prescriptions for patients' DED, and secondarily through analysis of the cost to the hospital. Data from 845 patients who attended the EED over a 3-month period were analyzed, and 157 patients met the inclusion and exclusion criteria. After conducting a review of the literature and collaborating with the corneal team, an algorithm for the prescription of lubricants in the EED was developed. Three plan-do-study-act (PDSA) cycles were conducted, with interventions such as emails, posters, in-person reminders, and education for incoming trainees. Data were collected from patient records and analyzed using statistical methods. The appropriateness of prescriptions was assessed by comparing them to the guidelines and by clinical correlation with a specialist registrar. The study found a substantial improvement in the number of appropriate prescriptions, with an increase from 55% to 93% over the three PDSA cycles. There was additionally a 51% reduction in expenditure on lubricant prescriptions, resulting in cost savings for the hospital (approximately £50/week). Theoretical importance: appropriate prescription of lubricating eye drops improves disease management for patients and reduces costs for the hospital, and the development and implementation of a standardized guideline facilitate the achievement of these goals. Conclusion: this study highlights the inconsistent management of DED in the EED and the potential lack of training in this area for healthcare providers. The implementation of a standardized, easy-to-follow guideline for lubricating eye drops can help to improve disease management while also resulting in cost savings for the hospital.

Keywords: lubrication, dry eye disease, guideline, prescription

Procedia PDF Downloads 63
1899 A Task Scheduling Algorithm in Cloud Computing

Authors: Ali Bagherinia

Abstract:

An efficient task scheduling method can meet users' requirements, improve resource utilization, and thus increase the overall performance of the cloud computing environment. Cloud computing has new features, such as flexibility and virtualization. In this paper, we propose a two-level task scheduling method based on load balancing in cloud computing. This method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator demonstrate.
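
A toy sketch of such a two-level, load-balancing assignment is given below; the host/VM data structures and load units are illustrative assumptions, not the CloudSim model used in the paper.

```python
# Toy sketch of a two-level, load-balancing scheduler: each task is first
# routed to the least-loaded host, then to the least-loaded VM on that host.
# Task lengths and VM load units (MIPS-style) are illustrative.
def schedule(tasks, hosts):
    # hosts: {host_id: {vm_id: current_load}}
    assignment = {}
    load = {h: dict(vms) for h, vms in hosts.items()}
    for task_id, length in sorted(tasks.items(), key=lambda t: -t[1]):
        # Level 1: pick the host with the smallest total load.
        host = min(load, key=lambda h: sum(load[h].values()))
        # Level 2: pick the least-loaded VM on that host.
        vm = min(load[host], key=load[host].get)
        load[host][vm] += length
        assignment[task_id] = (host, vm)
    return assignment

tasks = {"t1": 400, "t2": 250, "t3": 100}
hosts = {"h1": {"vm1": 0, "vm2": 50}, "h2": {"vm3": 20}}
print(schedule(tasks, hosts))
```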

Keywords: cloud computing, task scheduling, virtualization, SLA

Procedia PDF Downloads 396
1898 Securing Mobile Ad-Hoc Network Utilizing OPNET Simulator

Authors: Tariq A. El Shheibia, Halima Mohamed Belhamad

Abstract:

This paper considers securing data based on the multi-path protocol (SDMP) in a mobile ad hoc network using the OPNET simulator (Modeler 14.5), with the AODV routing protocol at the network layer as the basis of the multi-path algorithm for message security in MANETs. The main idea of this work is to present a way to detect an attacker inside a MANET. Detection of this attacker is performed by adding some effective parameters to the network.

Keywords: MANET, AODV, malicious node, OPNET

Procedia PDF Downloads 289
1897 Deep Q-Network for Navigation in Gazebo Simulator

Authors: Xabier Olaz Moratinos

Abstract:

Drone navigation is critical, particularly during the initial phases, such as the initial ascension, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters while external disturbances push it at speeds of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
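
A minimal sketch of the DQN update at the core of such a controller is shown below (PyTorch); the state dimension, action set, network size, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Minimal DQN update: an online Q-network regresses toward a bootstrapped
# target computed with a periodically synced target network.
state_dim, n_actions, gamma = 9, 5, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    # s, s_next: (batch, state_dim); a: (batch,) long; r, done: (batch,) float
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bellman target: r + gamma * max_a' Q_target(s', a'), zero at episode end.
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```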

Keywords: machine learning, DQN, Gazebo, navigation

Procedia PDF Downloads 71
1896 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students’ Performance, Cognitive Load, and Task Time

Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy Frerejan, Saedeh Mokhtari, Peter Van Rosmalen

Abstract:

Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks in order to facilitate complex learning in both real and simulated environments. The four-component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health professions education. The objective of this study was to apply the 4C/ID model to the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complicating factors and common errors for novices and how they can affect the design of a virtual patient program. The study then selected key dental information and considered the content needs of dental students. The design of the virtual patients was based on the fundamental principles of the 4C/ID model, which included: designing learning tasks that reflect real patient scenarios and applying different levels of task complexity to challenge students to apply their knowledge and skills in different contexts; creating varied learning materials that support students during the VP program and are closely integrated with the learning tasks and the students' curricula; providing cognitive feedback at different levels of the program; and providing procedural information, with students following a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using the 4C/ID model's principles, and an experimental design was used to test the effectiveness of the principles in achieving the intended educational outcomes. The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are some challenges and pitfalls that instructional designers should take into account when developing these educational tools.

Keywords: 4C/ID model, virtual patients, education, dental, instructional design

Procedia PDF Downloads 76
1895 Dynamic Communications Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. H. Benyamina

Abstract:

In this paper, we propose a heuristic for dynamic communications mapping that considers the placement of communications in order to optimize the overall performance. The mapping technique uses a newly proposed algorithm to place communications between the tasks. The proposed placement of communications leads to a better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed mapping approach provides significant performance improvements when compared to approaches using static routing.

Keywords: Multi-Processor Systems-on-Chip (MPSoCs), Network-on-Chip (NoC), heterogeneous architectures, dynamic mapping heuristics

Procedia PDF Downloads 525
1894 Effectiveness of Medication and Non-Medication Therapy on Working Memory of Children with Attention Deficit and Hyperactivity Disorder

Authors: Mohaammad Ahmadpanah, Amineh Akhondi, Mohammad Haghighi, Ali Ghaleiha, Leila Jahangard, Elham Salari

Abstract:

Background: Working memory is the capability to keep and manipulate information over a short period of time. This capability underlies complex judgments and has been regarded as a specific and stable characteristic of individuals. Children with attention deficit and hyperactivity are among those suffering from deficient working memory, and this deficiency has been attributed to problems of the frontal lobe. This study utilizes a new approach, with suitable tasks and methods for training working memory and assessing the effects of the training. Participants: The children participating in this study were 7-15 years of age and were diagnosed as hyperactive and attention-deficit by a psychiatrist and a psychologist based on DSM-IV criteria. The intervention group consisted of 8 boys and 6 girls with an average age of 11 years and a standard deviation of 2; the control group consisted of 2 girls and 5 boys with an average age of 11.4 and a standard deviation of 3. Three children in the intervention group and two in the control group were receiving medication. Results: Working memory training meaningfully improved performance in untrained areas such as visual-spatial working memory, as well as performance on Raven's progressive matrices, a classic example of nonverbal, complex reasoning tasks. In addition, motor activity, measured as the number of head movements during the computerized assessment program, was meaningfully reduced in the medication group. The results of the second test showed that giving similar training exercises to teenagers and adults also improves cognitive functions, as it does in hyperactive people. Discussion: The results of this study showed that working memory performance improves through training, and that these gains extend and generalize to other areas of cognitive function that received no training. Training resulted in improved performance on tasks related to the prefrontal cortex and had a positive and meaningful impact on the motor activity of hyperactive children.

Keywords: attention deficit hyperactivity disorder, working memory, non-medical treatment, children

Procedia PDF Downloads 363
1893 Bee Colony Optimization Applied to the Bin Packing Problem

Authors: Kenza Aida Amara, Bachir Djebbar

Abstract:

We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the valorization of the lost areas in the bins and the reduction of the problem size. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We also present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without pretreatment.
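
A simplified sketch of the first-fit strategy is shown below; it uses remaining bin area as a one-dimensional proxy for admissibility, so it only illustrates the control flow, not the geometric placement that the full two-dimensional heuristic and the pretreatment handle.

```python
# Simplified first-fit sketch: rectangles, sorted by decreasing area
# (first-fit decreasing), go into the first bin whose remaining area fits.
def first_fit(rectangles, bin_w, bin_h):
    bin_area = bin_w * bin_h
    bins = []          # remaining free area per open bin
    placement = {}
    for rid, (w, h) in sorted(rectangles.items(),
                              key=lambda r: -(r[1][0] * r[1][1])):
        area = w * h
        for i, free in enumerate(bins):
            if area <= free:           # first bin with enough free area
                bins[i] -= area
                placement[rid] = i
                break
        else:
            bins.append(bin_area - area)   # open a new bin
            placement[rid] = len(bins) - 1
    return placement, len(bins)
```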

Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment

Procedia PDF Downloads 629
1892 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image data. The training images may come from different sources and be acquired on different radiography machines produced by different manufacturers, as digital or digitized copies of film radiographs, with various sizes and different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified, off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect. The proposed method can be used to augment training and testing image data sets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in any medical imaging application.
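
The histogram-matching step mentioned above is standard and can be sketched directly; the example below remaps a source image's intensities so that their empirical CDF follows that of a reference image.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Unique intensities, their positions, and their counts in the source.
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source quantile, take the reference intensity at that quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```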

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 211
1891 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers

Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho

Abstract:

In the present paper, the design of plate heat exchangers is formulated as an optimization problem considering two mathematical models. The number of plates is the objective function to be minimized, with some parameter configurations considered implicitly. Screening is the optimization method used to solve the problem. Thermal and hydraulic constraints are verified, non-viable solutions are discarded, and the method searches for convergence to the optimum, in case it exists. A case study is presented to test the applicability of the developed algorithm. Results are consistent with the literature.

Keywords: plate heat exchanger, optimization, modeling, simulation

Procedia PDF Downloads 512
1890 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of most interest are time-to-event, so-called survival, data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a way that has a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association of response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for the estimation of causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease the complexity, we propose modified estimating equations. After the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant real-data example. To sum up, the bias of covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated into the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias raised in the model due to the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 119
1889 Drug Design Modelling and Molecular Virtual Simulation of an Optimized BSA-Based Nanoparticle Formulation Loaded with Di-Berberine Sulfate Acid Salt

Authors: Eman M. Sarhan, Doaa A. Ghareeb, Gabriella Ortore, Amr A. Amara, Mohamed M. El-Sayed

Abstract:

Drug salting and nanoparticle-based drug delivery formulations are considered an effective means of rendering hydrophobic drugs nano-scale dispersible in aqueous media, thus circumventing the pitfalls of their poor solubility and enhancing their membrane permeability. The current study aims to increase the bioavailability of quaternary ammonium berberine through acid salting and a biodegradable bovine serum albumin (BSA)-based nanoparticulate drug formulation. Berberine hydroxide (BBR-OH), chemically synthesized by alkalization of the commercially available berberine hydrochloride (BBR-HCl), was acidified to obtain di-berberine sulfate, (BBR)₂SO₄. The purified crystals were spectrally characterized. The desolvation technique was optimized for the preparation of size-controlled BSA-BBR-HCl, BSA-BBR-OH, and BSA-(BBR)₂SO₄ nanoparticles. Particle size, zeta potential, drug release, encapsulation efficiency, Fourier transform infrared spectroscopy (FTIR), tandem MS-MS spectroscopy, energy-dispersive X-ray spectroscopy (EDX), scanning and transmission electron microscopic examination (SEM, TEM), in vitro bioactivity, and in silico drug-polymer interaction were determined. The protonation state of BSA (PDB ID: 4OR0) at different pH values was predicted using an Amber12 molecular dynamics simulation. Blind docking was then performed using the Lamarckian genetic algorithm (LGA) in the AutoDock4.2 software. Results proved the purity and size-controlled synthesis of the berberine-BSA nanoparticles. The possible binding poses and the hydrophobic and hydrophilic interactions of berberine on BSA at different pH values were predicted. The antioxidant, anti-hemolytic, and cell differentiation abilities of the tested drugs and their nano-formulations were evaluated. Thus, drug salting and potentially effective albumin-berberine nanoparticle formulations can be successfully developed using a well-optimized desolvation technique, exhibiting better in vitro cellular bioavailability.

Keywords: berberine, BSA, BBR-OH, BBR-HCl, BSA-BBR-HCl, BSA-BBR-OH, (BBR)₂SO₄, BSA-(BBR)₂SO₄, FTIR, AutoDock4.2 software, Lamarckian genetic algorithm, SEM, TEM, EDX

Procedia PDF Downloads 166
1888 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla

Authors: Roxana D. Maiorescu-Murphy

Abstract:

In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general, and in the immediate aftermath of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk's endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk, on May 18, 2022, and June 15, 2022, respectively. The total sample stemming from the two data sets consisted of N=1,374 user comments written in response to Musk's initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, used in circumstances in which the phenomena to be studied have not been researched before. According to the case study methodology, which answers the questions of how and why a phenomenon occurs, this study addressed the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was then defined and labeled, and additional exposures to the text ensured that these were exhaustive. The results revealed that the CEO's political endorsements triggered moral outrage, leading Tesla to face a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders' predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk's political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately as a scandal is flourishing online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter's political views.

Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk

Procedia PDF Downloads 87
1887 DC/DC Boost Converter Applied to Photovoltaic Pumping System Application

Authors: S. Abdourraziq, M. A. Abdourraziq

Abstract:

One of the most famous and important applications of solar energy systems is water pumping. It is often used for irrigation or to supply water in the countryside or on private farms. However, cost and efficiency are still a concern, especially with the continuous variation of solar radiation and temperature throughout the day. Improving the efficiency of the system components is therefore one of the possible ways of reducing the cost. In this paper, we present a detailed description of each element of a PV pumping system and review the different MPPT algorithms used in the literature. Our system consists of a PV panel, a boost converter, a motor-pump set, and a storage tank.
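
Of the MPPT algorithms reviewed in the literature, perturb and observe (P&O) is among the most common; a minimal hill-climbing sketch of it is given below, with the duty-cycle step size and limits as illustrative assumptions.

```python
# Hill-climbing sketch of perturb-and-observe (P&O) MPPT: nudge the boost
# converter's duty cycle and keep the direction that increased PV power.
def perturb_and_observe(v, i, state, step=0.005):
    # state carries the previous power, duty cycle, and direction between calls.
    p = v * i
    if p > state["p_prev"]:
        state["duty"] += step * state["direction"]       # keep going
    else:
        state["direction"] *= -1                         # reverse direction
        state["duty"] += step * state["direction"]
    state["duty"] = min(max(state["duty"], 0.05), 0.95)  # clamp duty cycle
    state["p_prev"] = p
    return state["duty"]

state = {"p_prev": 0.0, "duty": 0.5, "direction": 1}
# Called once per control cycle with the measured PV voltage and current.
```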

Keywords: PV cell, converter, MPPT, MPP, PV pumping system

Procedia PDF Downloads 152
1886 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals' method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters for maximizing UUS speed are frequently discussed; however, this does not apply to most races. In only 3 out of the 16 individual competitive swimming events are athletes likely to attempt to perform UUS at the greatest speed, without thinking of the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while considering an energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; and (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers' natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 214
1885 Compromising Quality of Life in Low-Income Settlements: The Case of Ashrayan Prakalpa, Khulna

Authors: Salma Akter, Md. Kamal Uddin

Abstract:

This study aims to demonstrate how a top-down shelter policy and its resultant dwelling environment lead to 'everyday compromise' by the grassroots, according to subjective (satisfaction) and objective (physical design elements and physical environmental elements) indicators, measured across three levels of the settlement: macro (community), meso (neighborhood, or shelter/built environment) and micro (family). Ashrayan Prakalpa is a resettlement/housing project of the Government of Bangladesh providing shelters and human resource development activities, such as education, microcredit, and training programmes, to landless, homeless and rootless people. Despite the integrated nature of the shelter policies (comprising poverty alleviation, employment opportunity, secure tenure, and livelihood training), the 'quality of life' at the different levels of the settlements remains questionable. As dwellers of shelter units (formally termed 'barracks' rather than shelters or housing) remain on the receiving end of the government's resettlement policies, they often engage in spatial-physical and socio-economic negotiation and assume curious forms of spatial practice, which often contradict policy planning. Thus, policy-based shelter forces dwellers to compromise persistently with their provided built environments, both overtly and covertly. Compromising with prescribed designed space and facilities across living places articulates their negotiation with the quality of allocated space, built form and infrastructure, which in turn manifests as a lower quality of life. The top-down shelter project Dakshin Chandani Mahal Ashrayan Prakalpa at Dighalia Upazila, the study area located on the eastern fringe of Khulna, Bangladesh, is still in progress, resettling internally displaced and homeless people. In terms of methodology, this research is primarily exploratory and adopts a case study method; an analytical framework is developed through a deductive approach for evaluating the quality of life. Secondary data have been obtained from housing policy analysis and a review of the relevant literature, while key informant interviews, focus group discussions, necessary drawings and photographs, and participant observation across the dwelling, neighborhood, and community levels have been administered as the primary data collection methodology. Findings reveal that various shortages, inadequacies, and the negligence of policymakers force residents to compromise with allocated designed space, physical infrastructure and economic opportunities at the dwelling, neighborhood and, mostly, community levels. Thus, the outcome of this study can be beneficial for a global-level understanding of compromises to the 'quality of life' under top-down shelter policy. Locally, for instance in the context of Bangladesh, it can help policymakers and concerned authorities to formulate shelter policies and take initiatives to improve the well-being of the marginalized.

Keywords: Ashrayan Prakalpa, compromise, displaced people, quality of life

Procedia PDF Downloads 146
1884 Electrodermal Activity Measurement Using Constant Current AC Source

Authors: Cristian Chacha, David Asiain, Jesús Ponce de León, José Ramón Beltrán

Abstract:

This work explores and characterizes the behavior of the AFE AD5941 in impedance measurement using an embedded algorithm with a constant-current AC source. The main aim of this research is to improve the accuracy of impedance measurement for application in EDA-focused wearable devices. Through comprehensive study and characterization, it has been observed that employing a measurement sequence with a constant current source produces results with increased dispersion but higher accuracy. As a result, this approach leads to a more accurate system for impedance measurement.

Keywords: EDA, constant current AC source, wearable, precision, accuracy, impedance

Procedia PDF Downloads 100
1883 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting the data differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form a dataset, constructed for each time point, that contains all the information required for freight moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data reporting event cases. The Grubbs test is often used as it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that offer more possibilities of extracting the best solution. For freight delivery management, genetic algorithm structural schemes are used as a more effective technique; accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in a multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
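
The Grubbs step is standard and can be sketched directly; the example below implements the two-sided test with the usual t-distribution critical value at the 99% confidence level used in the study (the sample data are illustrative).

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha: float = 0.01):
    # Two-sided Grubbs test: flags (at most) the single most extreme value.
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd                     # Grubbs statistic
    # Critical value from the t distribution:
    # G_crit = ((n-1)/sqrt(n)) * sqrt(t^2 / (n-2+t^2)), t at alpha/(2n).
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return (idx, x[idx]) if g > g_crit else None

# Example: one suspiciously high fuel-consumption reading (illustrative data).
print(grubbs_outlier([9.8, 10.1, 10.0, 9.9, 10.2, 14.7]))
```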

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 174
1882 Optimization Process for Ride Quality of a Nonlinear Suspension Model Based on Newton-Euler's Augmented Formulation

Authors: Mohamed Belhorma, Aboubakar S. Bouchikhi, Belkacem Bounab

Abstract:

This paper addresses the modeling of a double A-arm suspension; a three-dimensional nonlinear model has been developed using the multibody systems formalism. A dynamical study of the responses of the different components was carried out, particularly for the wheel assembly. To validate those results, the system was constructed and simulated in RecurDyn, a professional multibody dynamics simulation software package. The model has been used as the objective function in an optimization algorithm for ride quality improvement.

Keywords: double A-Arm suspension, multibody systems, ride quality optimization, dynamic simulation

Procedia PDF Downloads 134